[ "End-to-end approaches for sequence tasks are becoming increasingly popular.", "Yet for complex sequence tasks, like speech translation, systems that cascade several models trained on sub-tasks have been shown to be superior, suggesting that the compositionality of cascaded systems simplifies learning and enables sophisticated search capabilities.", "In this work, we present an end-to-end framework that exploits compositionality to learn searchable hidden representations at intermediate stages of a sequence model using decomposed sub-tasks.", "These hidden intermediates can be improved using beam search to enhance the overall performance and can also incorporate external models at intermediate stages of the network to re-score or adapt towards out-of-domain data.", "One instance of the proposed framework is a Multi-Decoder model for speech translation that extracts the searchable hidden intermediates from a speech recognition sub-task.", "The model demonstrates the aforementioned benefits and outperforms the previous state-of-the-art by around +6 and +3 BLEU on the two test sets of Fisher-CallHome and by around +3 and +4 BLEU on the English-German and English-French test sets of MuST-C.", "1 Introduction The principle of compositionality loosely states that a complex whole is composed of its parts and the rules by which those parts are combined (Lake and Baroni, 2018).", "This principle is present in engineering, where task decomposition of a complex system is required to assess and optimize task allocations (Levis et al., 1994), and in natural language, where paragraph coherence and discourse analysis rely on decomposition into sentences (Johnson, 1992; Kuo, 1995) and sentence-level semantics relies on decomposition into lexical units (Liu et al., 2020b).", "1 All code and models are released as part of the ESPnet toolkit: https://github.com/espnet/espnet.", "Similarly, many sequence-to-sequence tasks that convert one sequence into another (Sutskever et al., 2014) 
can be decomposed into simpler sequence sub-tasks in order to reduce the overall complexity.", "For example, speech translation systems, which seek to process speech in one language and output text in another language, can be naturally decomposed into the transcription of source language audio through automatic speech recognition (ASR) and translation into the target language through machine translation (MT).", "Such cascaded approaches have been widely used to build practical systems for a variety of sequence tasks like hybrid ASR (Hinton et al., 2012), phrase-based MT (Koehn et al., 2007), and cascaded ASR-MT systems for speech translation (ST) (Pham et al., 2019).", "End-to-end sequence models like encoder-decoder models (Bahdanau et al., 2015; Vaswani et al., 2017) are attractive in part due to their simple design and the reduced need for hand-crafted features.", "However, studies have shown mixed results compared to cascaded models, particularly for complex sequence tasks like speech translation (Inaguma et al., 2020) and spoken language understanding (Coucke et al., 2018).", "Although direct target sequence prediction avoids the issue of error propagation from one system to another in cascaded approaches (Tzoukermann and Miller, 2018), there are many attractive properties of cascaded systems, missing in end-to-end approaches, that are useful in complex sequence tasks.", "In particular, we are interested in (1) the strong search capabilities of cascaded systems that compose the final task output from individual system predictions (Mohri et al., 2002; Kumar et al., 2006; Beck et al., 2019), (2) the ability to incorporate external models to re-score each individual system (Och and Ney, 2002; Huang and Chiang, 2007), (3) the ability to easily adapt individual components towards out-of-domain data (Koehn and Schroeder, 2007; Peddinti et al., 2015), and finally (4) the ability to monitor performance of the individual systems towards the decomposed sub-task 
(Tillmann and Ney, 2003; Meyer et al., 2016).", "In this paper, we seek to incorporate these properties of cascaded systems into end-to-end sequence models.", "We first propose a generic framework to learn searchable hidden intermediates using an auto-regressive encoder-decoder model for any decomposable sequence task (Section 3).", "We then apply this approach to speech translation, where the intermediate stage is the output of ASR, by passing continuous hidden representations of discrete transcript sequences from the ASR sub-net decoder to the MT sub-net encoder.", "By doing so, we gain the ability to use beam search with optional external model re-scoring on the hidden intermediates, while maintaining end-to-end differentiability.", "Next, we suggest mitigation strategies for the error propagation issues inherited from decomposition.", "We show the efficacy of searchable intermediate representations in our proposed model, called the Multi-Decoder, on speech translation with a 5.4 and 2.8 BLEU score improvement over the previous state-of-the-art for the Fisher and CallHome test sets, respectively (Section 6).", "We extend these improvements by an average of 0.5 BLEU through the aforementioned benefit of re-scoring the intermediate search with external models trained on the same dataset.", "We also show a method for monitoring sub-net performance using oracle intermediates that are free of search errors (Section 6.1).", "Finally, we show how these models can adapt to out-of-domain speech translation datasets, how our approach can be generalized to other sequence tasks like speech recognition, and how the benefits of decomposition persist even for larger corpora like MuST-C (Section 6.2).", "The probabilistic space of a sequence is combinatorial in nature, such that a sentence of L words from a fixed vocabulary V would have an output space S of size |V|^L.", "In order to deal with this combinatorial output space, an output sentence is decomposed into labeled target tokens, y = (y_1, y_2, ..., y_L), where y_l ∈ V.", "This decomposition allows sequence-to-sequence models to learn next word prediction, which outputs a distribution over the next target token y_l given the previous tokens y_{1:l-1} and the input sequence x = (x_1, x_2, ..., x_T), where T is the input sequence length.", "In the next sub-section, we detail the training and inference of these models.", "Training: In an auto-regressive encoder-decoder model, the ENCODER maps the input sequence x to a sequence of continuous hidden representations h^E = (h^E_1, h^E_2, ..., h^E_T), where h^E_t ∈ R^d.", "The DECODER then auto-regressively maps h^E and the preceding ground-truth output tokens, y_{1:l-1}, to h^D_l, where h^D_l ∈ R^d.", "The sequence of decoder hidden representations forms h^D = (h^D_1, h^D_2, ..., h^D_L), and the likelihood of each output token y_l is given by SOFTMAXOUT, which denotes an affine projection of h^D_l to the vocabulary V followed by a softmax function.", "h^E = ENCODER(x); h^D_l = DECODER(h^E, y_{1:l-1}) (1); P(y_l | y_{1:l-1}, h^E) = SOFTMAXOUT(h^D_l) (2).", "During training, the DECODER performs token classification for next word prediction by considering only the ground truth sequences for previous tokens y.", "We refer to this h^D as oracle decoder representations, which will be discussed later.", "Inference: During inference, we can maximize the likelihood of the entire sequence from the output space S by composing the conditional probabilities of each step for the L tokens in the sequence.", "h^D_l = DECODER(h^E, y_{1:l-1}) (3); P(y_l | x, y_{1:l-1}) = SOFTMAXOUT(h^D_l); y = argmax_{y ∈ S} ∏_{i=1}^{L} P(y_i | x, y_{1:i-1}) (4).", "This is an intractable search problem, and it can be approximated by either greedily choosing the argmax at each step or using a search algorithm like beam search to approximate y.", "Beam search (Reddy, 1988) generates candidates at each step and prunes the search space to a tractable beam size 
of B most likely sequences.", "As B → ∞, the beam search result would be equivalent to equation 4.", "In approximate search for auto-regressive models, like beam search, the DECODER receives alternate candidates of previous tokens to find candidates with a higher likelihood as an overall sequence.", "This also allows for the use of external models like language models (LM) or connectionist temporal classification (CTC) models for re-scoring candidates (Hori et al., 2017).", "In this section, we present a general framework to exploit natural decompositions in sequence tasks which seek to predict some output C from an input sequence A.", "If there is an intermediate sequence B for which A → B sequence transduction followed by B → C prediction achieves the original task, then the original A → C task is decomposable.", "In other words, if we can learn P(B | A), then we can learn the overall task of P(C | A) through max_B (P(C | A, B) P(B | A)), approximated using Viterbi search.", "We define a first encoder-decoder SUB-AB-NET to map an input sequence A to a sequence of decoder hidden states, h^{D_B}.", "Then we define a subsequent SUB-BC-NET to map h^{D_B} to the final probabilistic output space of C.", "Therefore, we call h^{D_B} hidden intermediates.", "The following equations show the two sub-networks of our framework, SUB-AB-NET and SUB-BC-NET, which can be trained end-to-end while also exploiting compositionality in sequence tasks.", "2 Note that this framework does not use locally-normalized softmax distributions but rather the hidden representations, thereby avoiding label bias issues when combining multiple sub-systems (Bottou et al., 1997; Wiseman and Rush, 2016).", "SUB-AB-NET: h^E = ENCODER_A(A); h^{D_B}_l = DECODER_B(h^E, y^B_{1:l-1}); P(y^B_l | y^B_{1:l-1}, h^E) = SOFTMAXOUT(h^{D_B}_l) (5).", "SUB-BC-NET:", "Note that the final prediction, given by equation 6, does not need to be a sequence and can be a categorical class like in spoken language 
understanding tasks.", "Next, we will show how the hidden intermediates become searchable during inference.", "As stated in Section 2.2, approximate search algorithms maximize the likelihood, P(y | x), of the entire sequence by considering different candidates y_l at each step.", "Candidate-based search, particularly in auto-regressive encoder-decoder models, also affects the decoder hidden representations, h^D, as these are directly dependent on the previous candidate (refer to equations 1 and 3).", "This implies that by searching for better approximations of the previous predicted tokens, y_{l-1} = (y_BEAM)_{l-1}, we also improve the decoder hidden representations for the next token, h^D_l = (h^D_BEAM)_l.", "As y_BEAM → y, the decoder hidden representations tend to the oracle decoder representations that have only errors from next word prediction, h^D_BEAM → h^D.", "A perfect search is analogous to choosing the ground truth y at each step, which would yield h^D.", "We apply this beam search to the hidden intermediates, thereby approximating h^{D_B} with h^{D_B}_BEAM.", "This process is illustrated in Algorithm 1, which shows beam search for h^{D_B}_BEAM that are subsequently passed to the SUB-BC-NET.", "In line 7, we show how an external model like an LM or a CTC model can be used to generate an alternate sequence likelihood, P_EXT(y^B_l), which can be combined with the SUB-AB-NET likelihood, P_AB(y^B_l | x), with a tunable parameter λ.", "Algorithm 1 Beam Search for Hidden Intermediates: We perform beam search to approximate the most likely sequence for the sub-task A → B, y^B_BEAM, while collecting the corresponding DECODER_B hidden representations, h^{D_B}_BEAM.", "The output h^{D_B}_BEAM is passed to the final sub-network to predict the final output C, and y^B_BEAM is used for monitoring performance on predicting B.", "1: Initialize: BEAM ← {sos}; k ← beam size; 2: h^{E_A} ← ENCODER_A(x); 3: for l = 1 to MAX-STEPS do; 4: for y^B_{l-1} ∈ BEAM do; 5: h^{D_B}_l ← DECODER_B(h^{E_A}, y^B_{l-1}); 6: for y^B_l ∈ y^B_{l-1} + {V} do; 7: s_l ← P_AB(y^B_l | x)^{1-λ} P_EXT(y^B_l)^{λ}; 8: H ← H ∪ (s_l, y^B_l, h^{D_B}_l); 9: end for; 10: end for; 11: BEAM ← arg-k-max(H); 12: end for; 13: (s^B, y^B_BEAM, h^{D_B}_BEAM) ← argmax(BEAM); 14: Return y^B_BEAM (for SUB-AB-NET monitoring); 15: Return h^{D_B}_BEAM (for the final SUB-BC-NET).", "We can monitor the performance of the SUB-AB-NET by comparing the decoded intermediate sequence y^B_BEAM to the ground truth y^B.", "We can also monitor the SUB-BC-NET performance by using the aforementioned oracle representations of the intermediates, h^{D_B}, which can be obtained by feeding the ground truth y^B to DECODER_B.", "By passing h^{D_B} to the SUB-BC-NET, we can observe its performance in a vacuum, i.e., free of search errors in the hidden intermediates.", "In order to show the applicability of our end-to-end framework, we propose our Multi-Decoder model for speech translation.", "This model predicts a sequence of text translations y^ST from an input sequence of speech x and uses a sequence of text transcriptions y^ASR as an intermediate.", "3 The algorithm shown only considers a single top approximation of the search; however, with added time-complexity, the final task prediction improves with the n-best h^{D_B}_BEAM for selecting the best resultant C.", "In this case, the SUB-AB-NET in equation 5 is specified as the ASR sub-net and the SUB-BC-NET in equation 6 is specified as the MT sub-net.", "Since the MT sub-net is also a sequence prediction task, both sub-nets are encoder-decoder models in our architecture (Bahdanau et al., 2015; Vaswani et al., 2017).", "In Figure 1 we illustrate the schematics of our transformer-based Multi-Decoder ST model, which can also be summarized as follows: h^{E_ASR} = ENCODER_ASR(x) (7); h^{D_ASR}_l = DECODER_ASR(h^{E_ASR}, y^ASR_{1:l-1}) (8); h^{E_ST} = ENCODER_ST(h^{D_ASR}) (9); h^{D_ST}_l = DECODER_ST(h^{E_ST}, y^ST_{1:l-1}) (10).", "As we can see from Equations 9 and 10, the MT sub-network attends 
only to the decoder representations, h^{D_ASR}, of the ASR sub-network, which could lead to error propagation issues from the ASR sub-network to the MT sub-network, similar to cascaded systems, as mentioned in Section 1.", "To alleviate this problem, we modify equation 10 such that DECODER_ST attends to both h^{E_ST} and h^{E_ASR}: h^{D_ST}_l = DECODER_ST(h^{E_ST}, h^{E_ASR}, y^ST_{1:l-1}) (11).", "We use the multi-sequence cross-attention discussed by Helcl et al. (2018), shown on the right side of Figure 1, to condition the final outputs generated by h^{D_ST}_l on both speech and transcript information in an attempt to allow our network to recover from intermediate mistakes during inference.", "We call this model the Multi-Decoder w/ Speech-Attention.", "For our baseline model, we use an end-to-end encoder-decoder (Enc-Dec) ST model with ASR joint training (Inaguma et al., 2020) as an auxiliary loss to the speech encoder.", "In other words, the model consumes speech input using ENCODER_ASR to produce h^{E_ASR}, which is used for cross-attention by DECODER_ASR and DECODER_ST.", "Using the decomposed ASR task as an auxiliary loss also helps the baseline Enc-Dec model and provides strong baseline performance, as we will see in Section 6.", "Data: We demonstrate the efficacy of our proposed approach on ST in the Fisher-CallHome corpus (Post et al., 2013), which contains 170 hours of Spanish conversational telephone speech, transcriptions, and English translations.", "All punctuation except apostrophes was removed, and results are reported in terms of detokenized case-insensitive BLEU (Papineni et al., 2002; Post, 2018).", "We compute BLEU using the 4 references in Fisher (dev, dev2, and test) and the single reference in CallHome (dev and test) (Post et al., 2013; Kumar et al., 2014; Weiss et al., 2017).", "We use a joint source and target vocabulary of 1K byte pair encoding (BPE) units (Kudo and Richardson, 2018).", "We prepare the corpus using the ESPnet 
library and we follow the standard data preparation, where inputs are globally mean-variance normalized log-mel filterbank and pitch features from up-sampled 16kHz audio (Watanabe et al., 2018).", "We also apply speed perturbations of 0.9 and 1.1 and the SS SpecAugment policy (Park et al., 2019).", "Baseline Configuration: All of our models are implemented using the ESPnet library and trained on 3 NVIDIA Titan 2080Ti GPUs for 12 hours.", "For the Baseline Enc-Dec model, discussed in Section 4, we use an ENCODER_ASR consisting of convolutional sub-sampling by a factor of 4 (Watanabe et al., 2018) and 12 transformer encoder blocks with 2048 feed-forward dimension, 256 attention dimension, and 4 attention heads.", "DECODER_ASR and DECODER_ST both consist of 6 transformer decoder blocks with the same configuration as ENCODER_ASR.", "There are 37.9M trainable parameters.", "We apply dropout of 0.1 for all components, detailed in the Appendix (A.1).", "We train our models using an effective batch size of 384 utterances and use the Adam optimizer (Kingma and Ba, 2015) with an inverse square root decay learning rate schedule.", "We set the learning rate to 12.5, warmup steps to 25K, and the number of epochs to 50.", "We use joint training with hybrid CTC/attention ASR (Watanabe et al., 2017) by setting mtl-alpha to 0.3 and asr-weight to 0.5 as defined by Watanabe et al. 
(2018).", "During inference, we perform beam search (Seki et al., 2019) on the ST sequences, using a beam size of 10, a length penalty of 0.2, and a max length ratio of 0.3 (Watanabe et al., 2018).", "Multi-Decoder Configuration: For the Multi-Decoder ST model, discussed in Section 3, we use the same transformer configuration as the baseline for ENCODER_ASR, DECODER_ASR, and DECODER_ST.", "Additionally, the Multi-Decoder has an ENCODER_ST consisting of 2 transformer encoder blocks with the same configuration as ENCODER_ASR, giving a total of 40.5M trainable parameters.", "The training configuration is also the same as for the baseline.", "For the Multi-Decoder w/ Speech-Attention model (42.1M trainable parameters), we increase the attention dropout of the ST decoder to 0.4 and dropout on all other components of the ST decoder to 0.2 while keeping dropout on the remaining components at 0.1.", "We verified that increasing the dropout does not help the vanilla Multi-Decoder ST model.", "During inference, we perform beam search on both the ASR and ST output sequences, as discussed in Section 3.", "The ST beam search is identical to that of the baseline.", "For the intermediate ASR beam search, we use a beam size of 16, a length penalty of 0.2, and a max length ratio of 0.3.", "In some of our experiments, we also include fusion of a source language LM with a 0.2 weight and CTC with a 0.3 weight to re-score the intermediate ASR beam search (Watanabe et al., 2017).", "For the Speech-Attention variant, we increase the LM weight to 0.4.", "Note that the ST beam search configuration remains constant across our baseline and Multi-Decoder experiments, as our focus is on improving overall performance through searchable intermediate representations.", "Thus, the various re-scoring techniques applied to the ASR beam search are options newly enabled by our proposed architecture and are not used in the ST beam search.", "Table 1 presents the overall ST performance (BLEU) of our proposed Multi-Decoder model.", "Our 
model improves by +2.9/+0.3 BLEU (Fisher/CallHome) over the best cascaded baseline and by +5.6/+1.5 BLEU over the best published end-to-end baselines.", "With Speech-Attention, our model improves by +3.4/+1.6 BLEU over the cascaded baselines and +7.1/+2.8 BLEU over encoder-decoder baselines.", "Both the Multi-Decoder and Multi-Decoder w/ Speech-Attention are further improved on average by +0.9/+0.4 BLEU through ASR re-scoring.", "Table 1 also includes our implementation of the Baseline Enc-Dec model discussed in Section 4.", "In this way, we are able to make a fair comparison with our framework, as we control the model and inference configurations to be analogous.", "4 We also evaluate our models using other MT metrics to supplement these results, as shown in the Appendix (A.2).", "For instance, we keep the same search parameters for the final output in the baseline and the Multi-Decoder to demonstrate the impact of the intermediate beam search.", "An added benefit of our proposed approach over the Baseline Enc-Dec is the ability to monitor the individual performances of the ASR (% WER) and MT (BLEU) sub-nets, as shown in Table 2.", "The Multi-Decoder w/ Speech-Attention shows greater MT sub-net performance than the Multi-Decoder as well as a slight improvement in the ASR sub-net, suggesting that ST can potentially help ASR.", "The overall ST performance improves when a higher beam size is used in the intermediate ASR search, and this increase can be attributed to the improved ASR sub-net performance.", "Figure 2 shows this trend across ASR beam sizes of 1, 4, 8, 10, and 16 while fixing the ST decoding beam size to 10.", "A beam size of 1, which is a greedy search, results in lower ASR sub-net and overall ST performance.", "As beam sizes become larger, gains taper off, as can be seen between beam sizes of 10 and 16.", "External models like CTC acoustic models and language models are commonly used for re-scoring encoder-decoder models (Hori et al., 2017), due to 
the difference in their modeling capabilities.", "CTC directly models transcripts while being conditionally independent of the other outputs given the input, and LMs predict the next token in a sequence.", "Both variants of the Multi-Decoder improve due to improved ASR sub-net performance using external CTC and LM models for re-scoring, as shown in Table 3.", "Table 3: Overall ST (BLEU, higher is better) and sub-net ASR (% WER, lower is better). Multi-Decoder: 52.7 / 22.6; +ASR Re-scoring w/ LM: 53.2 / 22.6; +ASR Re-scoring w/ CTC: 52.8 / 22.1; +ASR Re-scoring w/ LM & CTC: 53.3 / 21.7; Multi-Decoder w/ Speech-Attn.", "We use a recurrent neural network LM trained on the Fisher-CallHome Spanish transcripts with a dev perplexity of 18.8 and the CTC model from the joint loss applied during training.", "Neither external model incorporates additional data.", "Although the impact of the LM-only re-scoring is not shown in the ASR % WER, it reduces substitution and deletion rates in the ASR, and this is observed to help the overall ST performance.", "As discussed in Section 3, our Multi-Decoder model inherits the error propagation issue, as can be seen in Figure 3.", "For the easiest bucket of utterances with < 40% WER in the Multi-Decoder's ASR sub-net, our model's ST performance, as measured by the corpus BLEU of the bucket, exceeds that of the Baseline Enc-Dec.", "The inverse is true for the more difficult bucket of [40, 80)%, showing that error propagation is limiting the performance of our model; however, we show that multi-sequence attention can alleviate this issue.", "For extremely difficult utterances in the ≥ 80% bucket, ST performance for all three approaches is suppressed.", "We also provide qualitative examples of error propagation avoidance in the Appendix (A.3).", "In this section, we discuss the generalizability of our framework towards out-of-domain data.", "We also extend our Multi-Decoder model to other sequence tasks like speech recognition.", "Finally, we apply our ST models to a larger corpus with more language pairs and a different domain of 
speech.", "Like cascaded systems, searchable intermediates provide our model adaptability in individual sub-systems towards out-of-domain data using external in-domain language models, thereby giving access to more in-domain data.", "Specifically for speech translation systems, this means we can use in-domain language models in both the source and target languages.", "We test the robustness of our Multi-Decoder model trained on the Fisher-CallHome conversational speech dataset on the read-speech CoVoST 2 dataset (Wang et al., 2020b).", "In Table 4 we show that re-scoring the ASR sub-net with an in-domain LM improves ASR with around 10.0% lower WER, improving the overall ST performance by around +2.5 BLEU.", "Compared to an in-domain ST baseline (Wang et al., 2020a), our out-of-domain Multi-Decoder with in-domain ASR re-scoring demonstrates the robustness of our approach.", "We apply our generic framework to another decomposable sequence task, speech recognition, and show the results of various levels of decomposition in Table 5.", "We show that with phoneme, character, or byte-pair encoding (BPE) sequences as intermediates, the Multi-Decoder presents strong results on both the Fisher and CallHome test sets.", "We also observe that the BPE intermediates perform better.", "Table 4: Overall ST performance (BLEU) and sub-net ASR (% WER) of our Multi-Decoder models when tested on out-of-domain data. In-domain ST model: Baseline (Wang et al., 2020b) 12.0 / -; +ASR Pretrain (Wang et al., 2020b) 23.0 / 16.0. Out-of-domain ST model: Multi-Decoder 11.8 / 46.8; +ASR Re-scoring w/ in-domain LM 14.4 / 36.7; Multi-Decoder w/ Speech-Attention 12.6 / 46.5; +ASR Re-scoring w/ in-domain LM 15.0 / 36.7.", "In addition to our results using the 170 hours of the Spanish-English Fisher-CallHome corpus, in Table 6 we show that our decompositional framework is also effective on larger ST corpora.", "In particular, we use 400 hours of English-German and 500 hours of 
English-French ST from the MuST-C corpus (Di Gangi et al., 2019).", "Our Multi-Decoder model improves by +2.7 and +1.5 BLEU in German and French, respectively, over end-to-end baselines from prior works that do not use additional training data.", "We show that ASR re-scoring gives an additional +0.1 and +0.4 BLEU improvement.", "By extending our Multi-Decoder models to this MuST-C study, we show the generalizability of our approach across several dimensions of ST tasks.", "5 Details of the MuST-C data preparation and model parameters are detailed in the Appendix (A.4).", "First, our approach consistently improves over baselines across multiple language pairs.", "Second, our approach is robust to the distinct domains of telephone conversations from Fisher-CallHome and TED talks from MuST-C.", "Finally, by scaling from 170 hours of Fisher-CallHome data to 500 hours of MuST-C data, we show that the benefits of decomposing sequence tasks with searchable hidden intermediates persist even with more data.", "Furthermore, the performance of our Multi-Decoder models trained with only English-German or English-French ST data from MuST-C is comparable to other methods which incorporate larger external ASR and MT data in various ways.", "For instance, Zheng et al. (2021) use 4700 hours of ASR data and 2M sentences of MT data for pretraining and multi-task learning.", "Similarly, Bahar et al. 
(2021) use 2300 hours of ASR data and 27M sentences of MT data for pretraining.", "Our competitive performance without the use of any additional data highlights the data-efficient nature of our proposed end-to-end framework as opposed to the baseline encoder-decoder model, as pointed out by Sperber and Paulik (2020).", "Compositionality: A number of recent works have constructed composable neural network modules for tasks such as visual question answering (Andreas et al., 2016), neural MT (Raunak et al., 2019), and synthetic sequence-to-sequence tasks (Lake, 2019).", "Modules that are first trained separately can subsequently be tightly integrated into a single end-to-end trainable model by passing differentiable soft decisions instead of discrete decisions in the intermediate stage (Bahar et al., 2021).", "Further, even a single encoder-decoder model can be decomposed into modular components where the encoder and decoder modules have explicit functions (Dalmia et al., 2019).", "Joint Training with Sub-Tasks: End-to-end sequence models have been shown to benefit from introducing joint training with sub-tasks as auxiliary loss functions for a variety of tasks like ASR (Kim et al., 2017), ST (Salesky et al., 2019; Liu et al., 2020a; Dong et al., 2020; Le et al., 2020), and SLU (Haghani et al., 2018).", "They have been shown to induce structure (Belinkov et al., 2020) and improve model performance (Toshniwal et al., 2017), but this joint training may reduce data efficiency if some sub-nets are not included in the final end-to-end model (Sperber et al., 2019; Wang et al., 2020c).", "Our framework avoids this sub-net waste at the cost of computational load during inference.", "Speech Translation Decoders: Prior works have used ASR/MT decoding to improve the overall ST decoding through synchronous decoding (Liu et al., 2020a), dual decoding (Le et al., 2020), and successive decoding (Dong et al., 2020).", "These works partially or fully decode ASR transcripts and use discrete 
intermediates to assist MT decoding.", "Tu et al. (2017) and Anastasopoulos and Chiang (2018) are closest to our Multi-Decoder ST model; however, the benefits of our proposed framework are not entirely explored in these works.", "Two-Pass Decoding: Two-pass decoding involves first predicting with one decoder and then re-evaluating with another decoder (Geng et al., 2018; Sainath et al., 2019; Hu et al., 2020; Rijhwani et al., 2020).", "The two decoders iterate on the same sequence, so there is no decomposition into sub-tasks in this method.", "On the other hand, our approach provides the subsequent decoder with a more structured representation than the input by decomposing the complexity of the overall task.", "Like two-pass decoding, our approach provides a sense of the future to the second decoder, which allows it to correct mistakes from the first decoder.", "Auto-Regressive Decoding: As auto-regressive decoders inherently learn a language model along with the task at hand, they tend to be domain-specific (Samarakoon et al., 2018; Müller et al., 2020).", "This can cause generalizability issues during inference (Murray and Chiang, 2018; Yang et al., 2018), impacting the performance of both the task at hand and any downstream tasks.", "Our approach alleviates these problems through intermediate search, external models for intermediate re-scoring, and multi-sequence attention.", "We present searchable hidden intermediates for end-to-end models of decomposable sequence tasks.", "We show the efficacy of our Multi-Decoder model on the Fisher-CallHome Es→En and MuST-C En→De and En→Fr speech translation corpora, achieving state-of-the-art results.", "We present various benefits in our framework, including sub-net performance monitoring, beam search for better hidden intermediates, external models for better search, and error propagation avoidance.", "Further, we demonstrate the flexibility of our framework towards out-of-domain tasks with the ability to adapt 
our sequence model at intermediate stages of decomposition.", "Finally, we show generalizability by training Multi-Decoder models for the speech recognition task at various levels of decomposition.", "We hope the insights derived from our study stimulate research on tighter integrations between the benefits of cascaded and end-to-end sequence models.", "Exploiting searchable intermediates through beam search is just the tip of the iceberg for search algorithms, as numerous approximate search techniques like diverse beam search (Vijayakumar et al., 2018) and best-first beam search (Meister et al., 2020) have recently been proposed to improve diversity and approximation of the most-likely sequence.", "Incorporating differentiable lattice-based search (Hannun et al., 2020) can also allow the subsequent sub-net to digest n-best representations.", "This work started while Vikas Raunak was a student at CMU; he is now working as a Research Scientist at Microsoft.", "We thank Pengcheng Guo, Hirofumi Inaguma, Elizabeth Salesky, Maria Ryskina, Marta Méndez Simón, and Vijay Viswanathan for their helpful discussions during the course of this project.", "We also thank the anonymous reviewers for their valuable feedback.", "This work used the Extreme Science and Engineering Discovery Environment (XSEDE) (Towns et al., 2014), which is supported by National Science Foundation grant number ACI-1548562.", "Specifically, it used the Bridges system (Nystrom et al., 2015), which is supported by NSF award number ACI-1445606, at the Pittsburgh Supercomputing Center (PSC).", "The work was supported in part by an AWS Machine Learning Research Award.", "This research was also supported in part by the DARPA KAIROS program from the Air Force Research Laboratory under agreement number FA8750-19-2-0200.", "The U.S. 
Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon.", "The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Air Force Research Laboratory or the U.S. Government." ]
[ "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "objective", "method", "method", "abstain", "objective", "objective", "result", "result", "other", "other", "other", "abstain", "other", "other", "other", "other", "abstain", "abstain", "other", "other", "other", "other", "other", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "result", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "method", "objective", "result", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other" ]
[ "Pre-trained models for programming languages have recently demonstrated great success on code intelligence.", "To support both code-related understanding and generation tasks, recent works attempt to pre-train unified encoder-decoder models.", "However, such an encoder-decoder framework is sub-optimal for auto-regressive tasks, especially code completion, which requires a decoder-only manner for efficient inference.", "In this paper, we present UniXcoder, a unified cross-modal pre-trained model for programming languages.", "The model utilizes mask attention matrices with prefix adapters to control the behavior of the model and leverages cross-modal contents like AST and code comment to enhance code representation.", "To encode AST, which is represented as a tree, in parallel, we propose a one-to-one mapping method to transform AST into a sequence structure that retains all structural information from the tree.", "Furthermore, we propose to utilize multi-modal contents to learn representations of code fragments with contrastive learning, and then align representations among programming languages using a cross-modal generation task.", "We evaluate UniXcoder on five code-related tasks over nine datasets.", "To further evaluate the performance of code fragment representation, we also construct a dataset for a new task, called zero-shot code-to-code search.", "Results show that our model achieves state-of-the-art performance on most tasks and analysis reveals that comment and AST can both enhance UniXcoder.", "Pre-trained models such as GPT (Radford et al.)
and BERT (Devlin et al., 2018) have substantially advanced the state of the art across numerous natural language processing (NLP) tasks.", "These pre-trained models are pre-trained on large amounts", "Work done while this author was an intern at Microsoft Research; contact: Daya Guo (guody5@mail2.sysu.edu.cn).", "of text data with self-supervised objectives, and can be fine-tuned to adapt to downstream tasks.", "Inspired by the success of pre-trained models in NLP, pre-trained models for programming languages (PL) (Kanade et al., 2019; Feng et al., 2020; Svyatkovskiy et al., 2020) have been proposed to promote the development of code intelligence.", "Svyatkovskiy et al. (2020) propose GPT-C, which employs a left-to-right Transformer (Vaswani et al., 2017) to support generation tasks such as code completion, but the unidirectional framework is sub-optimal for understanding tasks.", "In contrast, other works (Kanade et al., 2019; Feng et al., 2020) pre-train a bidirectional Transformer encoder on source code, which significantly improves the performance of code-related understanding tasks.", "However, its bidirectional nature requires an additional decoder when applied to generation tasks, where this decoder initializes from scratch and cannot benefit from the pre-training.", "In this work, we present UniXcoder, a unified cross-modal pre-trained model for programming languages to support both code-related understanding and generation tasks.", "UniXcoder is based on a multi-layer Transformer and follows Dong et al.
(2019) to utilize mask attention matrices with prefix adapters to control the access to context for each token.", "Compared with current unified encoder-decoder models (Ahmad et al., 2021; Wang et al., 2021) on code intelligence, UniXcoder can be better applied to auto-regressive tasks such as code completion, which requires a decoder-only manner to perform efficient inference in practice.", "Instead of taking code as the only input, we also consider multi-modal contents like code comment and abstract syntax tree (AST) to enhance code representation.", "Generally, user-written code comments provide crucial semantic information about source code like Sort a given list and AST contains rich syntax information like types of statements and nested relationships among them, which helps the model better understand source code.", "To encode AST, which is represented as a tree, in parallel, we propose a one-to-one mapping method to transform AST into a sequence structure that retains all information of the tree, and then the sequence can be used as the input to enhance code representation.", "We pre-train UniXcoder using three types of language modeling tasks: masked language modeling (Devlin et al., 2018), unidirectional language modeling (Radford et al.)
and denoising objective (Raffel et al., 2019), which can enable the model to support various types of downstream tasks.", "Furthermore, we introduce two pre-training tasks to learn an embedding that can represent the semantics of a code fragment.", "One is multi-modal contrastive learning that leverages AST to enhance the semantics of code fragment embeddings, and the other is cross-modal generation that utilizes code comment to align embeddings among programming languages.", "We evaluate UniXcoder on five tasks over nine public datasets, including two understanding tasks: clone detection and code search, two generation tasks: code summarization and code generation, and an auto-regressive task: code completion.", "To further test code fragment embeddings, we propose a new task, called zero-shot code-to-code search, and construct a new dataset from the CodeNet corpus (Puri et al., 2021) for this task.", "Experimental results show that our model achieves state-of-the-art performance on most tasks.", "Further analysis reveals that AST and code comment can both enhance UniXcoder to better capture code semantics.", "In summary, the contributions of this paper are: (1) We propose a unified cross-modal pre-trained model that leverages multi-modal contents, i.e.
code comment and AST, to support code-related understanding, generation and auto-regressive tasks.", "(2) We propose a one-to-one mapping function that converts AST into a sequence that retains all information of AST and can be encoded with source code and comment in parallel.", "(3) We further propose to utilize code comment to learn code fragment representation and construct a new dataset for zero-shot code-to-code search to evaluate the quality of code fragment representation.", "(4) Experimental results show that UniXcoder provides significant improvement on most downstream tasks.", "All the code and data are available at https://github.com/microsoft/CodeBERT .", "With the great success of pre-training in natural language (NL) processing (Devlin et al., 2018; Lewis et al., 2019; Raffel et al., 2019; Brown et al., 2020), pre-trained models for programming languages have been proposed to promote the development of code intelligence.", "These pre-trained models can be generally divided into three categories: encoder-only, decoder-only, and encoder-decoder models.", "Encoder-only models (Kanade et al., 2019; Buratti et al., 2020; Feng et al., 2020; Guo et al., 2020; Wang et al., 2022) pre-train a bidirectional Transformer in which each token can attend to every other token.", "Kanade et al.
(2019) pre-train CuBERT on a corpus of Python source code by masked language modeling and next sentence prediction objectives.", "CodeBERT (Feng et al., 2020) is pre-trained on NL-PL pairs in six programming languages with a new pre-training task, namely replaced token detection.", "GraphCodeBERT (Guo et al., 2020) leverages data flow to enhance code representation, while SYNCOBERT (Wang et al., 2022) incorporates abstract syntax tree by AST edge prediction and contrastive learning.", "However, encoder-only models require an additional decoder for generation tasks, where this decoder initializes from scratch and cannot benefit from the pre-training.", "As for decoder-only pre-trained models, Svyatkovskiy et al. (2020) and Lu et al. (2021) respectively propose GPT-C and CodeGPT, which are both pre-trained using unidirectional language modeling that only allows tokens to attend to the previous tokens and themselves to predict the next token.", "Decoder-only models are good at auto-regressive tasks like code completion, but the unidirectional framework is sub-optimal for understanding tasks.", "Some recent works explore encoder-decoder models to support both understanding and generation tasks.", "PLBART (Ahmad et al., 2021) is based on the BART (Lewis et al., 2019) architecture and pre-trained on NL and PL corpus using denoising objectives.", "CodeT5 (Wang et al., 2021) adapts the T5 (Raffel et al., 2019) model, considers the crucial token type information from identifiers, and allows for multi-task learning on downstream tasks.", "TreeBERT (Jiang et al., 2021) follows the encoder-decoder transformer framework but utilizes the tree structural information by modeling AST paths.", "Different from current unified models, UniXcoder is based on a multi-layer Transformer and utilizes mask attention matrices with prefix adapters
(Figure 1: A Python code with its comment and AST.)", "to control the behavior of the model for supporting both understanding and generation tasks.", "Compared with the encoder-decoder architecture, UniXcoder can be better applied to auto-regressive tasks like code completion, which is widely used in IDEs, since the task requires a decoder-only manner to perform efficient inference in practice.", "Liu et al. (2020) also pre-train a similar model CugLM with multi-task learning, but they only focus on code completion rather than various tasks.", "Besides, we incorporate syntax information from AST by a one-to-one mapping function that converts an AST into a sequence to enhance code representation.", "Different from previous pre-trained models that utilize AST, the mapping function retains all structural information from AST and does not require additional pre-training tasks (such as edge prediction) to implicitly learn the AST structure.", "In this section, we describe UniXcoder, a unified cross-modal pre-trained model that leverages multi-modal data (i.e. code comment and AST) to pre-train code representation.", "The model is based on Transformer and utilizes mask attention matrices (Dong et al., 2019) with prefix adapters to control the behavior of the model.", "In the following, we first introduce how to unify multi-modal data as the input of UniXcoder (3.1), and then the model architecture (3.2) and pre-training tasks (3.3).", "We give an example of a Python code with its comment and AST in Figure 1.
From the figure, we", "can see that the comment Return the sample arithmetic mean of data highly describes the function of the source code, which provides crucial semantic information about the source code.", "Besides, AST provides rich syntax information; for example, the subtree parameters (data) indicates the type (i.e., parameters ) of the term (data) in the function definition.", "Both of them can be used as additional knowledge to enhance code representation in pre-trained models.", "However, AST is usually expressed as a tree and cannot be used directly as input to Transformer.", "In order to encode AST in parallel with code comments, we propose a one-to-one mapping function F , described in Algorithm 1, to transform an AST into a sequence that retains all structural information.", "Specifically, given a root node root of an AST, the algorithm recursively applies the same function F to its children and then adds its name with two special suffixes (i.e. left and right , respectively) on both sides (lines 6-11 of Algorithm 1).", "If the root node is a leaf, we directly produce its name (lines 4-5).", "Taking parameters (data) as an example, the mapping function F transforms the subtree to <parameters,left> ( data ) <parameters,right> .", "There can be various ways to transform a tree to a sequence of tokens, e.g.
pre-order traversal.", "However, a particular transformation should be a one-to-one mapping function.", "Otherwise, the mapping may confuse a tree with another structure.", "Our mapping function F satisfies this requirement (see Appendix A for a proof).", "Finally, given a source code C , we take its comment W = {w_0, w_1, ..., w_{m-1}} and the flattened AST token sequence F(T(C)) = {c_0, c_1, ..., c_{k-1}} as input, where T(C) is the root of the AST of the code.", "For input format, we concatenate them with a prefix as an input sequence, as shown at the bottom of Figure 2, where the prefix represents the work mode of the model and will be discussed next.", "Figure 2 shows the model architecture of UniXcoder.", "The model applies N transformer layers over code comment and flattened AST with a prefix to produce hidden states H^N = {h^N_0, h^N_1, ..., h^N_{n-1}}, where the prefix p ∈ {[Enc], [Dec], [E2D]} indicates the behavior of the model, e.g. [E2D] means that UniXcoder works as an encoder-decoder model.", "Each transformer layer contains an architecturally identical transformer that uses a multi-headed self-attention operation (Vaswani et al., 2017) followed by a feed-forward layer over the output of the previous layer.", "For the l-th transformer layer, the output of the multi-headed self-attention is computed via: Q = H^{l-1} W_Q, K = H^{l-1} W_K, V = H^{l-1} W_V (1), head = softmax(Q K^T / √d_k + M) V (2), where the previous layer's output H^{l-1} ∈ R^{n×d_h} is linearly mapped to a triplet of queries, keys and values, respectively.", "d_k is the dimension of a head, and M ∈ R^{n×n} is a mask matrix to control the context a token can attend to when computing its contextual representation, as shown in the middle of Figure 2.
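The masked self-attention of Equations (1)-(2) can be sketched in a few lines. This is an illustrative single-head NumPy version with random weights and no feed-forward layer or layer normalization, not the released UniXcoder code:

```python
import numpy as np

def softmax(x):
    # Numerically stable row-wise softmax.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def masked_attention_head(H, W_Q, W_K, W_V, M):
    """One self-attention head with an additive mask M (0 = attend, -inf = blocked)."""
    Q, K, V = H @ W_Q, H @ W_K, H @ W_V              # Eq. (1)
    d_k = Q.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d_k) + M) @ V   # Eq. (2)

n, d_h, d_k = 4, 8, 8
rng = np.random.default_rng(0)
H = rng.normal(size=(n, d_h))                        # previous layer's output H^{l-1}
W = [rng.normal(size=(d_h, d_k)) for _ in range(3)]  # W_Q, W_K, W_V

enc_mask = np.zeros((n, n))                          # encoder-only: all tokens visible
dec_mask = np.triu(np.full((n, n), -np.inf), k=1)    # decoder-only: causal mask
out = masked_attention_head(H, *W, dec_mask)
out_enc = masked_attention_head(H, *W, enc_mask)
```

With the causal mask, perturbing a later token cannot change an earlier token's output; with the encoder mask it can.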
If the i-th token is allowed to attend to the j-th token, then M_ij is set to 0, and to −∞ otherwise.", "For encoder-only mode, we add a special token [Enc] as the prefix in front of the input and set all elements of the mask matrix to 0 to allow all tokens to attend to each other.", "For decoder-only mode, a prefix [Dec] is used and the upper triangular part of the mask is set to −∞ to indicate that each token can only attend to itself and previous tokens.", "For encoder-decoder mode, tokens in the source input are allowed to attend to each other, while tokens in the target input only attend to themselves and previous tokens in both source and target inputs.", "We use the [E2D] prefix to indicate that UniXcoder works as an encoder-decoder model.", "During the pre-training phase, model parameters are shared in different modes and optimized with several objectives to support various types of downstream tasks.", "We describe the pre-training tasks used in UniXcoder in this section.", "As shown on the right side of Figure 2, we first pre-train UniXcoder using three tasks, including masked language modeling (Devlin et al., 2018), unidirectional language modeling (Radford et al.) and denoising objective (Raffel et al., 2019).", "These tasks are designed for different modes, enabling UniXcoder to support various types of code-related downstream tasks.", "We then propose to utilize multi-modal data to learn code fragment embeddings through contrastive learning with cross-modal generation, as shown in Figure 3. Masked Language Modeling: For encoder-only mode, we follow Devlin et al.
(2018) to apply the masked language modeling (MLM) pre-training task.", "Specifically, we sample 15% of the tokens S_m from the input sequence, and then replace 80% (10%) of them with a [MASK] (random) token and leave another 10% of them unchanged.", "The task is to predict the original tokens of the masked tokens based on their bidirectional contextual tokens, as illustrated in Figure 2", "(a).", "In particular, the model can leverage semantic information from comment and syntax information from AST to infer masked code tokens, which encourages the model to learn code representations from different knowledge resources.", "The objective is calculated as Equation 3, where X^mask is the masked input sequence.", "Unidirectional Language Modeling: We use the unidirectional language modeling (ULM) pre-training task to pre-train decoder-only mode for supporting auto-regressive tasks like code completion, as shown in Figure 2", "(b).", "The task predicts the next token x_i one by one conditioned on the previous tokens {x_0, x_1,", "..., x_{i-1}}, which can be done using a triangular matrix for the attention mask.", "Denoising Objective: The DeNoiSing (DNS) pre-training objective has been shown to be quite effective for encoder-decoder models like BART (Lewis et al., 2019) and T5 (Raffel et al., 2019) in NLP.", "The task randomly masks spans with arbitrary lengths and then generates these masked spans in encoder-decoder mode.", "To better support generation tasks", "like code summarization, we utilize a similar denoising objective as T5 for encoder-decoder mode, as illustrated in Figure 2", "(c).", "Specifically, we first split the input sequence into max(⌊nr/l⌋, 1) chunks and then randomly
mask a span of 1 to 2l−1 tokens for each chunk, where n is the length of the input, r is the corruption rate and l is the average length of masked spans.", "We set the corruption rate to 15% and the average length to 5.", "The concatenation {y_0, y_1, ..., y_{n-1}} of all masked spans, with a special token [MASK_k] in front of the k-th span, will be used as the output: loss_DNS = −Σ_{i=0}^{n−1} log p(y_i | X^mask, y_{<i}) (5) Code Fragment Representation Learning: In addition to the above three pre-training tasks designed for different modes, we propose to utilize multi-modal data to learn a semantic embedding h̃_i of a code fragment C_i .", "As shown in Figure 3, we first use UniXcoder to encode a mapped AST sequence and then apply a mean pooling layer over the hidden states of the source input to obtain the semantic embedding h̃_i .", "In order to learn the semantic embedding, we propose two pre-training tasks.", "One is multi-modal contrastive learning (MCL), and the other is cross-modal generation (CMG).", "For multi-modal contrastive learning, we follow Gao et al.
(2021) to forward the same input using different hidden dropout masks as a positive example h̃_i^+ and use other representations in the same batch as negative examples.", "The loss is calculated as Equation 6, where b is the batch size, τ is a temperature hyperparameter, and cos(·,·) is the cosine similarity between two vectors.", "For cross-modal generation, we ask the model to generate its comment W = {w_0, w_1, ..., w_{m-1}}.", "The comment describes the function of the code, which can help the model not only understand the code semantics but also align representations among different programming languages by using a unified natural language description as a fulcrum.", "Since the generation of the comment is conditioned on the code, it will force the model to fuse semantic information from the comment into the hidden states of the code.", "The loss is calculated as Equation 7, where X is the flattened AST token sequence.", "In order to learn the semantic embedding of natural language, we randomly exchange the source input and the target input with a probability of 50%.", "Considering that explicitly adding AST in downstream tasks will introduce extra costs like parsing time and increased input length (70% longer input length after tokenization), we implicitly learn knowledge from AST by pre-training and only keep leaves of AST (i.e. source code) in the fine-tuning phase.", "This gap can be alleviated by randomly dropping all non-terminal symbols of AST with a probability of 50% in the pre-training phase.", "More details about the pre-training dataset and settings can be found in Appendix B.
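The MCL objective above (Equation 6) can be sketched as an in-batch contrastive loss. This NumPy version is a minimal illustration assuming the SimCSE-style setup, where the positive h̃_i^+ comes from a second dropout-perturbed forward pass (simulated here with additive noise); it is not the authors' implementation:

```python
import numpy as np

def contrastive_loss(H, H_pos, tau=0.05):
    """In-batch contrastive loss over code fragment embeddings (cf. Eq. 6).

    H[i] and H_pos[i] are two encodings of the same input (e.g. obtained with
    different dropout masks); every other H_pos[j] in the batch is a negative.
    """
    H = H / np.linalg.norm(H, axis=1, keepdims=True)
    H_pos = H_pos / np.linalg.norm(H_pos, axis=1, keepdims=True)
    sim = (H @ H_pos.T) / tau                      # cosine similarity / temperature
    logits = sim - sim.max(axis=1, keepdims=True)  # stabilized log-softmax
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))             # positives lie on the diagonal

rng = np.random.default_rng(1)
H = rng.normal(size=(8, 16))                       # a batch of b=8 embeddings
H_pos = H + 0.01 * rng.normal(size=(8, 16))        # dropout-like perturbation
loss = contrastive_loss(H, H_pos)
```

When the positives are near-copies of their anchors, the loss is close to zero; with unrelated positives it approaches log b.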
4 Experiments We evaluate UniXcoder on five tasks over nine public datasets, including two understanding tasks (4.2), two generation tasks (4.3) and an auto-regressive task (4.4).", "To further evaluate the performance of code fragment embeddings, we also propose a new task called zero-shot code-to-code search (4.5).", "More details about datasets and fine-tuning can be found in Appendix C. 4.1 Baselines We compare UniXcoder with state-of-the-art pre-trained models, including encoder-only , decoder-only and encoder-decoder models.", "For encoder-only models, we consider RoBERTa (Liu et al., 2019) pre-trained on text corpus with MLM, CodeBERT (Feng et al., 2020) pre-trained on NL-PL pairs using both MLM and replaced token detection, GraphCodeBERT (Guo et al., 2020) that leverages data flow to enhance code representation, and SYNCOBERT that incorporates AST by edge prediction and contrastive learning.", "For decoder-only models, we consider GPT-2 (Radford et al., 2019) and CodeGPT (Lu et al., 2021), where the former is pre-trained on a text corpus and the latter on the CodeSearchNet dataset.", "Both use ULM as the objective.", "For encoder-decoder models, we mainly compare with the current unified models PLBART (Ahmad et al., 2021) and CodeT5 (Wang et al., 2021).", "PLBART is based on BART and pre-trained on 470M Python and 210M Java functions, and 47M NL posts from StackOverflow using a denoising objective.", "CodeT5, adapted from T5, considers the crucial token type information from identifiers and allows multi-task learning on downstream tasks.", "Clone Detection The task is to measure the similarity between two code fragments.", "We conduct experiments on the POJ-104 (Mou et al., 2016) and BigCloneBench (Svajlenko et al., 2014) datasets.", "The first dataset is to predict whether two codes have the same semantics and uses F1-score as the evaluation metric, while the second aims to retrieve semantically similar codes given a code as the query with the
Mean Average Precision (MAP) as the metric.", "Code Search The task aims to find the most relevant code from a collection of candidates given a natural language query.", "We conduct experiments on three datasets, namely CSN (Guo et al., 2020), AdvTest (Lu et al., 2021) and CosQA (Huang et al., 2021).", "The CSN dataset is constructed from the CodeSearchNet dataset of six programming languages, and low-quality queries are filtered out by handcrafted rules.", "AdvTest normalizes Python function and variable names to better test the understanding and generalization capabilities of models.", "The code base of CosQA is also from the CodeSearchNet corpus, but queries come from the search logs of the Microsoft Bing search engine.", "We use the Mean Reciprocal Rank (MRR) evaluation metric for the task.", "Results The results are shown in Table 1. Compared with encoder-only pre-trained models (i.e. the first group) and encoder-decoder models (i.e. the second group), UniXcoder outperforms them and achieves state-of-the-art performance on the two tasks on all five datasets.", "By comparing with the results of the ablation studies in the last six rows, we can see that the improvement mainly comes from contrastive learning and the use of multi-modality.", "Code Summarization The task aims to generate an NL summary of a code snippet.", "We use the dataset provided by the CodeXGLUE team (Lu et al., 2021) for this task.", "We use smoothed BLEU-4 (Lin and Och, 2004) as the evaluation metric and report the overall score over six PLs, including Ruby, JavaScript, Go, Python, Java, and PHP.", "Code Generation The task is to generate a code snippet based on an NL description.", "We use the CONCODE (Iyer et al., 2018) dataset, where the input consists of an NL description and code environments.", "For this task, we use exact match (EM) and BLEU-4 as evaluation metrics.", "Results From Table 2, UniXcoder achieves comparable performance on generation tasks compared with CodeT5-base and brings a 0.3% improvement in code
generation accuracy.", "However, UniXcoder has slightly worse BLEU-4 scores on both code summarization and generation tasks.", "The main reasons may come from two aspects.", "One is the amount of NL-PL pairs in the pre-training data.", "As shown in the ablation study (see w/o comment ) in the table, NL-PL pairs bring significant improvement on the two tasks.", "Wang et al. (2021) collect 50% more NL-PL pairs from GitHub to pre-train CodeT5.", "Since the collected data is not public, we cannot use it to pre-train UniXcoder for a fair comparison.", "Another reason is the model size.", "CodeT5-base uses a 12-layer encoder and a 12-layer decoder, which is twice as large as other baselines and UniXcoder.", "Therefore, we also list the results of CodeT5-small, which uses a 6-layer encoder and a 6-layer decoder.", "We can see that UniXcoder outperforms CodeT5-small.", "We use the PY150 (Raychev et al., 2016) and GitHub Java Corpus (Allamanis and Sutton, 2013) datasets in CodeXGLUE (Lu et al., 2021) for line-level code completion tasks.", "The task entails the completion of a whole line of code, and is evaluated using exact match accuracy and Levenshtein edit similarity (Svyatkovskiy et al., 2020).", "In practice, the task requires a decoder-only manner to perform efficient inference.", "Therefore, we first compare our UniXcoder with decoder-only models (the first group) in Table 3.
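Levenshtein edit similarity, mentioned above as a code-completion metric, can be sketched as follows. The normalization 1 − distance/max(length) is one common convention and an assumption here, not necessarily the exact formula used by Svyatkovskiy et al. (2020):

```python
def edit_similarity(pred: str, ref: str) -> float:
    """Levenshtein edit similarity: 1 - edit_distance / max(len), in [0, 1]."""
    m, n = len(pred), len(ref)
    if max(m, n) == 0:
        return 1.0  # two empty strings are identical
    # Standard dynamic-programming edit distance, keeping one row at a time.
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if pred[i - 1] == ref[j - 1] else 1
            cur[j] = min(prev[j] + 1,        # deletion
                         cur[j - 1] + 1,     # insertion
                         prev[j - 1] + cost) # substitution
        prev = cur
    return 1.0 - prev[n] / max(m, n)
```

For example, "kitten" vs "sitting" has edit distance 3 over a maximum length of 7, giving a similarity of 4/7.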
As we can see, UniXcoder achieves comparable performance on [Table 4 — MAP score (%) of zero-shot setting on the code-to-code search task; columns give the query PL (Ruby, Python, Java), each evaluated against target PLs Ruby/Python/Java, followed by the Overall score: CodeBERT 13.55 3.18 0.71 | 3.12 14.39 0.96 | 0.55 0.42 7.62 | 4.94; GraphCodeBERT 17.01 9.29 6.38 | 5.01 19.34 6.92 | 1.77 3.50 13.31 | 9.17; PLBART 18.60 10.76 1.90 | 8.27 19.55 1.98 | 1.47 1.27 10.41 | 8.25; CodeT5-base 18.22 10.02 1.81 | 8.74 17.83 1.58 | 1.13 0.81 10.18 | 7.81; UniXcoder 29.05 26.36 15.16 | 23.96 30.15 15.07 | 13.61 14.53 16.12 | 20.45; -w/o contras 24.03 17.35 7.12 | 15.80 22.52 7.31 | 7.55 7.98 13.92 | 13.73; -w/o cross-gen 28.73 24.16 12.92 | 21.52 26.66 12.60 | 11.14 10.82 13.75 | 18.03; -w/o comment 22.24 15.90 7.50 | 15.09 19.88 6.54 | 7.84 7.12 13.20 | 12.81; -w/o AST 27.54 23.37 10.17 | 21.75 27.75 9.94 | 9.79 9.21 14.06 | 17.06; -using BFS 26.67 23.69 13.56 | 21.31 27.28 13.63 | 11.90 12.55 14.92 | 18.39; -using DFS 27.13 22.65 11.62 | 20.21 25.92 11.85 | 9.59 10.19 13.30 | 16.94.]", "both datasets and brings an absolute 2.3% gain in accuracy on the Java corpus, which demonstrates the effectiveness of our model for code completion.", "Besides, we also compare with current unified models (the second group).", "Since they are based on the encoder-decoder framework, we fine-tune their decoders by feeding a placeholder into the encoder.", "Results show that UniXcoder outperforms PLBART and CodeT5, which demonstrates that our model framework is better applied to code completion tasks.", "To further evaluate the performance of code fragment embeddings, we also propose a new task called zero-shot code-to-code search.", "Given a source code as the query, the task aims to retrieve codes with the same semantics from a collection of candidates in a zero-shot setting.", "The task can help users translate from one PL to another by retrieving source codes with the same semantics.", "We collect 11,744/15,594/23,530 functions from the CodeNet corpus (Puri et al., 2021) in Ruby/Python/Java PL.", "Each function solves one of 4,053
problems.", "We take each function as a query and retrieve all functions that solve the same problem from each PL.", "We use the average MAP score as the evaluation metric.", "More details about the dataset and an example can be found in Appendix C.6.", "We re-implement the publicly released pre-trained models on this task using the mean vector or CLS vector of the last hidden states and report the results in Table 4. The first row is the query PL and the second row is the target PL.", "From the table, we can see that UniXcoder achieves state-of-the-art performance and about an 11-point improvement on the overall score compared with GraphCodeBERT.", "Ablation studies further show that both multi-modal data and code fragment representation pre-training tasks can enhance UniXcoder.", "The Effect of Representation Pre-training We conduct an ablation study to analyze the effect of the code fragment representation pre-training tasks by removing the contrastive learning task ( w/o contras ) and the cross-modal generation task ( w/o cross-gen ).", "As we can see in Tables 1 and 4, the two pre-training tasks significantly improve understanding tasks.", "Taking the zero-shot code-to-code search task as an example, after removing contrastive learning, the performance drops from 20.45% to 13.73%.", "Besides, the two pre-training tasks also bring a small improvement on generation tasks, as shown in Tables 2 and 3.
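The average MAP metric used for zero-shot code-to-code search can be sketched as follows; this is a minimal illustration in which each query's ranked candidates are reduced to a list of relevance flags (the helper names are ours, not from the paper):

```python
def average_precision(ranked_relevant):
    """AP of one query: mean of precision@k over each rank k holding a relevant item."""
    total = sum(ranked_relevant)
    if total == 0:
        return 0.0
    hits, ap = 0, 0.0
    for k, rel in enumerate(ranked_relevant, start=1):
        if rel:
            hits += 1
            ap += hits / k  # precision at this rank
    return ap / total

def mean_average_precision(all_queries):
    """MAP: average of per-query APs."""
    return sum(average_precision(q) for q in all_queries) / len(all_queries)
```

For instance, a ranking with relevant items at positions 1 and 3 has AP = (1/1 + 2/3) / 2 = 5/6.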
Overall, the ablation study demonstrates the effectiveness of the two pre-training tasks.", "The Effect of Multi-modal Data We also study the effect of multi-modal data.", "By removing comment ( w/o comment ), the results from the tables indicate that code comment plays an important role in both understanding and generation tasks.", "For AST ( w/o AST ), we observe that injecting AST can boost the performance on all code understanding tasks.", "However, AST does not bring improvements on generation tasks, which may require a better way to incorporate AST for generation tasks.", "Overall, AST and comment can both improve UniXcoder.", "Comparison of Traversal Algorithms We compare our mapping function with other mapping functions used to map a tree into a sequence, namely BFS and DFS algorithms.", "As we can see, after replacing our mapping function with the BFS or DFS algorithm, the performance of UniXcoder drops on both understanding and generation tasks, which demonstrates the effectiveness of our mapping function.", "In particular, using the BFS or DFS algorithm even hurts the performance of UniXcoder on some tasks, as can be seen by comparing using BFS (DFS) with w/o AST .", "The main reason may be that BFS and DFS algorithms are not one-to-one mapping functions and can confuse a tree with another structure.", "Case Study We also conduct a case study to intuitively demonstrate the effectiveness of UniXcoder, as shown in Figure 4.
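The one-to-one mapping function and its contrast with plain traversals, discussed in the traversal comparison above, can be sketched as follows; a minimal re-implementation of the idea behind Algorithm 1 (the Node class and function names are ours):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    children: list = field(default_factory=list)

def flatten(root: Node) -> list:
    """One-to-one mapping F: a non-terminal wraps its flattened children in
    <name,left> ... <name,right> suffix tokens; a leaf emits its name."""
    if not root.children:
        return [root.name]
    seq = [f"<{root.name},left>"]
    for child in root.children:
        seq += flatten(child)
    seq.append(f"<{root.name},right>")
    return seq

def preorder(root: Node) -> list:
    """Plain pre-order traversal, which is NOT one-to-one."""
    out = [root.name]
    for child in root.children:
        out += preorder(child)
    return out

# The example from the text: the subtree `parameters` with children (, data, ).
subtree = Node("parameters", [Node("("), Node("data"), Node(")")])
flat = flatten(subtree)  # <parameters,left> ( data ) <parameters,right>

# Two different trees that pre-order traversal conflates, but F distinguishes.
t1 = Node("a", [Node("b", [Node("c")])])
t2 = Node("a", [Node("b"), Node("c")])
```

Both t1 and t2 pre-order to [a, b, c], illustrating why a traversal without boundary markers can confuse one tree with another, while the suffix tokens keep the structures distinct.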
We give an example for the code search task on the CosQA dataset and show predictions from different models.", "The input query, taken from the search logs of the Microsoft Bing search engine, is python dict rank by value .", "The intent of the user is to sort a dictionary by its value in the Python language.", "Although the prediction from PLBART has higher lexical overlap with the query (e.g., rank and value ) than that of UniXcoder, the function is incorrect, since the input of the ground truth should be a dictionary.", "We can see that UniXcoder retrieves a correct function whose input is a dictionary.", "Besides, although the value in the query is expressed as the statement key=lambda t: t[1] in the function definition, UniXcoder can understand the code semantics and successfully retrieves the ground truth, which demonstrates its effectiveness.", "To support both code-related understanding and generation tasks, we present UniXcoder, a unified pre-trained model that incorporates semantic and syntactic information from code comments and ASTs.", "We propose a one-to-one mapping method to transform an AST into a sequence structure and two new pre-training tasks to learn code fragment representations.", "To further investigate the performance of code representation, we propose a new downstream task of zero-shot code-to-code search and create a dataset for this task.", "Experiments show that UniXcoder significantly outperforms previous works on most tasks.", "Further ablation studies also show that both AST and code comments can enhance UniXcoder and reveal the effectiveness of our proposed mapping function and pre-training tasks.", "Yanlin Wang is the corresponding author.", "Daya Guo and Jian Yin are supported by the National Natural Science Foundation of China (U1811264, U1811262, U1811261, U1911203, U2001211), the Guangdong Basic and Applied Basic Research Foundation (2019B1515130001), and the Key-Area Research and Development Program of Guangdong Province (2018B010107005, 2020B0101100001)." ]
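The claim above that BFS and DFS "can map different trees to the same sequence" is easy to demonstrate. Below is a minimal sketch (the tuple-based node encoding and the `<left>`/`<right>` marker names are illustrative assumptions, not UniXcoder's actual implementation): two distinct trees share the same plain DFS token sequence, while a bracketed traversal that marks subtree boundaries, in the spirit of the paper's one-to-one mapping function, keeps them apart.

```python
# Two different trees that a plain DFS traversal cannot tell apart,
# but a bracketed (one-to-one) traversal can.
# Node format (an illustrative assumption): (label, [children]).

def dfs(node):
    label, children = node
    seq = [label]
    for child in children:
        seq += dfs(child)
    return seq

def bracketed(node):
    # Wrap each subtree in <left>/<right> markers so structure is recoverable.
    label, children = node
    seq = [label, "<left>"]
    for child in children:
        seq += bracketed(child)
    seq.append("<right>")
    return seq

tree_a = ("a", [("b", []), ("c", [])])   # 'b' and 'c' are siblings
tree_b = ("a", [("b", [("c", [])])])     # 'c' is a child of 'b'

print(dfs(tree_a) == dfs(tree_b))              # True: DFS confuses the two trees
print(bracketed(tree_a) == bracketed(tree_b))  # False: markers preserve structure
```

Because the bracketed sequence is invertible back to the tree, no two distinct ASTs can collide, which is exactly the property the ablation (w/o BFS/DFS vs. w/o AST) probes.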
[ "abstain", "abstain", "abstain", "method", "abstain", "objective", "objective", "method", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "objective", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "objective", "objective", "objective", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "other", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", 
"abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "abstain", "objective", "other", "other" ]
[ "Multimodal pre-training has propelled great advancement in vision-and-language research.", "These large-scale pre-trained models, although successful, unfortunately suffer from slow inference speed due to enormous computation cost, mainly from the cross-modal attention in the Transformer architecture.", "When applied to real-life applications, such latency and computation demand severely deter the practical use of pre-trained models.", "In this paper, we study image-text retrieval (ITR), the most mature scenario of V+L applications, which has been widely studied even prior to the emergence of recent pre-trained models.", "We propose a simple yet highly effective approach, LightningDOT, that accelerates the inference time of ITR by thousands of times, without sacrificing accuracy.", "LightningDOT removes the time-consuming cross-modal attention by pre-training on three novel learning objectives, extracting feature indexes offline, and employing instant dot-product matching with further re-ranking, which significantly speeds up the retrieval process.", "In fact, LightningDOT achieves new state of the art across multiple ITR benchmarks such as Flickr30k, COCO and Multi30K, outperforming existing pre-trained models that consume orders of magnitude more computational hours.", "1 Introduction Image-text retrieval (ITR) has been widely studied as a staple benchmark task in both the NLP and computer vision communities.", "Traditional ITR search engines typically deploy ranking-based models built upon visual-semantic embedding matching (Faghri et al., 2017; Huang et al., 2018) or deep cross-modal fusion with attention mechanisms (Lee et al., 2018; Li et al., 2020a,b).", "1 Code and pre-training checkpoints are available at https://github.com/intersun/LightningDOT .", "The earliest works (Kiros et al., 2014; Faghri et al., 2017; Wang et al., 2018) employ a separate image encoder ( e.g., CNN) and text encoder ( e.g., RNN), the embeddings from which are then measured by dot product for 
similarity matching (Figure 1(a)).", "Later studies (Lee et al., 2018, 2019; Wang et al., 2019; Zhang et al., 2020) improve this paradigm by employing an advanced region-level visual encoder ( e.g., Faster R-CNN) and applying cross-attention between word features and region features for multimodal fusion (Figure 1(b)).", "With the advent of the Transformer (Vaswani et al., 2017) and BERT (Devlin et al., 2019), cross-modal retrieval tasks are more recently dominated by vision-and-language (V+L) pre-trained models, such as ViLBERT (Lu et al., 2019), UNITER (Chen et al., 2020), OSCAR (Li et al., 2020b), and VILLA (Gan et al., 2020).", "Large-scale pre-trained models learned from massive corpora of image-text pairs can power heterogeneous downstream tasks that take diverse modalities as inputs ( e.g., text, image, video, audio).", "These models benefit from the self-attention mechanism in the Transformer architecture, learning joint image+text embeddings through pre-training objectives such as masked language modeling (MLM) and masked region modeling (MRM) (Figure 1(c)).", "However, the very ingredient that engenders the success of these pre-trained models, cross-modal attention between the two modalities (through self-attention), also destines the inevitable latency and huge computation cost in training and deploying such massive-scale models.", "For example, UNITER (Chen et al., 2020) builds upon 12/24 Transformer layers and trains over 10 million image+text pairs.", "The inference time of such large models with 110 million parameters is 48 seconds on average for a text query from the COCO dataset (Chen et al., 2015), which is not scalable in real-life applications serving millions of queries per second.", "To make real-time ITR possible with low latency, we ask a bold question: can we go back to the beginning, reverting to simple dot product for efficient cross-modal retrieval?", "To make this retro experiment feasible, we rely on the Transformer to pre-train high-quality image and text 
encoders, but use efficient dot product for multimodal fusion instead of computationally heavy self-attention.", "To still facilitate effective cross-modal embedding learning, we use a special [CLS] token on both encoders, which transfers the learned embedding from the other modality (Figure 1(d)).", "We name this new paradigm LightningDOT , for its lightning speed benefiting from dot-product computation.", "By removing the time-consuming cross-attention between modalities, the model can learn visual-semantic embeddings without extensive matching between each image-text pair during inference, as used in existing pre-trained models (Chen et al., 2020; Li et al., 2020b; Lu et al., 2019).", "Further, by eliminating the dependency on real-time computation over image-text pairs, we can compute all image and text embeddings independently offline just once, and reuse these embeddings as cached indexes for new queries on the fly (Figure 2).", "For model training, we propose three learning objectives to jointly train two Transformer blocks: an Image Encoder and a Language Encoder.", "Specifically, visual-embedding fused MLM (namely VMLM ) and semantic-embedding fused MRM (namely SMRM ) ensure cross-modal information is harnessed even without cross-modality self-attention.", "A cross-modal retrieval objective (namely CMR ) encourages the model to learn multimodal fusion through pre-training.", "To maintain competitive model performance, we further introduce a re-ranking mechanism to bring back the benefit of cross-attention methods.", "In summary, LightningDOT is designed with late fusion to learn visual-semantic embeddings.", "Experiments on popular ITR benchmarks show that LightningDOT is 600/1900 times faster than existing pre-trained models on Flickr30k/COCO, while achieving new state-of-the-art results.", "When retrieving from a larger candidate pool (>120K images), LightningDOT is 23,000 times faster.", "To the best of our knowledge, this is the first known effort on 
improving V+L model efficiency.", "V+L Pre-training Inspired by the success of Transformer-based (Vaswani et al., 2017) language model pre-training (Devlin et al., 2019; Liu et al., 2019; Yang et al., 2019; Raffel et al., 2020; Lan et al., 2020; Clark et al., 2020), vision-and-language pre-training (Huang et al., 2020b; Su et al., 2020; Li et al., 2020b, 2019a) has become the prevailing paradigm in learning multimodal representations, with strong results on tasks such as image-text retrieval (Kiros et al., 2014), visual question answering (Antol et al., 2015) and referring expression comprehension (Yu et al., 2016).", "Exemplary works include two-stream (Tan and Bansal, 2019; Lu et al., 2019) and single-stream models (Chen et al., 2020; Li et al., 2020a; Zhou et al., 2020).", "Multi-task learning (Lu et al., 2020) and adversarial training (Gan et al., 2020) are also explored.", "This family of pre-training methods aims for general-purpose V+L representations without considering computation cost.", "To the best of our knowledge, our work is the first known effort on pre-training visual-semantic embeddings that enable low-latency real-time cross-modal retrieval.", "Ours is concurrent work with CLIP (Radford et al., 2021).", "Image-Text Retrieval Early cross-modal embedding works (Kiros et al., 2014; Wang et al., 2018; Faghri et al., 2017) focus on using a two-stream model to learn a unified visual-semantic embedding, with progressive improvement on two popular benchmarks: Flickr30K (Plummer et al., 2015) and COCO (Chen et al., 2015).", "Later methods with cross-attention (Lee et al., 2018, 2019; Wang et al., 2019; Zhang et al., 2020) become more popular, with significant performance gains.", "Pre-trained V+L models also fall into this category.", "By exploiting large-scale image-text datasets, pre-trained V+L models further push the performance on Flickr30K and COCO.", "Although achieving high recall, cross-attention requires excessive computation cost during inference that cannot 
be overlooked.", "In this work, inspired by dense retrieval in the text retrieval domain (Guu et al., 2020; Karpukhin et al., 2020; Xiong et al., 2020; Mao et al., 2020; Lewis et al., 2020), we propose a more efficient attention-less framework.", "With pre-training, our model achieves better performance while being significantly faster than cross-modal attention methods.", "Note that the proposed approach is orthogonal to model compression techniques that reduce the number of layers/parameters (Sun et al., 2019; Jiao et al., 2020), since we do not reduce the number of parameters from the UNITER baseline.", "These two approaches can be combined to further boost the speed, which is an interesting future work direction.", "In this section, we present the proposed LightningDOT framework, which consists of two deep Transformers as image and language encoders.", "We first introduce the three tasks designed to pre-train the model, then present our inference pipeline from offline feature extraction to online instant retrieval.", "We denote the image encoder and language encoder as $f_{\theta_V}$ and $f_{\theta_L}$, respectively ($\theta_V$, $\theta_L$ are learnable parameters).", "Given a dataset of paired images and text $\{(i, t)\}$, we first extract region features $v = \{v_0, v_1, \ldots, v_N\}$ ($v_j \in \mathbb{R}^{d_v}$, $N$ is the number of regions) for image $i$, along with the bounding box positions of the regions, via a pre-trained Faster R-CNN (Ren et al., 2015; Anderson et al., 2018).", "The image encoder $f_{\theta_V}$ encodes this sequence of image regions into a $d$-dimensional space: $f_{\theta_V}(v) = h = \{h_0, \ldots, h_N\}$ ($h_j \in \mathbb{R}^d$).", "The corresponding text $t$ is tokenized into sub-word units and projected into high-dimensional feature vectors $w = \{w_0, w_1, \ldots, w_T\}$ ($w_j \in \mathbb{R}^{d_w}$, $T$ is the number of tokens), following Devlin et al. (2019).", "Similarly, the text encoding process can be written as $f_{\theta_L}(w) = z = \{z_0, \ldots, z_T\}$ ($z_j \in \mathbb{R}^d$).", "We regard the output [CLS] embedding $h_0$ as the global image representation, and $z_0$ as the global text representation.", "The following sections discuss how to jointly train these two encoders to learn strong visual-semantic embeddings, through three pre-training objectives.", "Visual-embedding Fused Masked Language Modeling (VMLM) Masked language modeling (MLM) pre-training was first proposed by Devlin et al. (2019), where 15% of the words are masked and the model is trained to reconstruct the masked words.", "Formally, we denote $w_m = \{w_{m_1}, \ldots, w_{m_M}\}$ as the masked tokens, where $m \in \mathbb{N}^M$ is the set of masked indices of size $M$, randomly sampled from the natural numbers $\mathbb{N}$, and $w_{\setminus m}$ are the remaining unmasked tokens.", "3 $v_0$ is a special [CLS] embedding.", "4 A 30k BPE (Sennrich et al., 2016) vocabulary (bert-base-cased) is used to tokenize the text.", "The MLM loss is: $\mathcal{L}_{\mathrm{MLM}}(t) = -\log P_{\theta_L}(w_m \mid w_{\setminus m}) = -\frac{1}{M}\sum_{k=1}^{M} \log P_{\theta_{mlm}}(w_{m_k} \mid z_{m_k})$, (1) where $\theta_{mlm}$ denotes the additional parameters introduced to map hidden states $z$ to word probabilities.", "Under the V+L setting, the textual input is usually highly correlated with the image.", "To leverage this cross-modal relation, we propose visual-embedding fused MLM (VMLM), in which the paired image $i$ is considered as additional input when training the model to reconstruct masked tokens in sentence $t$.", "The loss function of VMLM can be formulated as: $\mathcal{L}_{\mathrm{VMLM}}(t, i) = -\log P_{\theta}(w_m \mid w_{\setminus m}, i) = -\frac{1}{M}\sum_{k=1}^{M} \log P_{\theta_{mlm}}(w_{m_k} \mid z_{m_k} + h_0)$, (2) where $\theta = \{\theta_V, \theta_L\}$ and the word probabilities $P_{\theta}$ are conditioned on the corresponding image $i$ via the global image representation $h_0$.", "Although VMLM takes a similar mathematical form to the MLM task proposed in UNITER, they differ in two main aspects: 1) LightningDOT uses two separate encoders ($h_0$ is computed by $f_{\theta_V}$); and 2) visual dependency is explicitly injected into the text representations ($z_{m_k} + h_0$), instead of implicitly learned through cross-modal attention.",
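To make Eq. (2) concrete, here is a toy numerical sketch (not the released LightningDOT implementation; the dimensions, the random projection matrix `W` standing in for the $\theta_{mlm}$ head, and the target ids are all invented for illustration) of fusing the global image embedding $h_0$ into each masked token's hidden state by simple addition before projecting to vocabulary logits:

```python
import numpy as np

# Toy sketch of the VMLM loss in Eq. (2). Shapes, W, and targets are made up;
# the only point illustrated is the fusion-by-addition of the image embedding.
rng = np.random.default_rng(0)
d, vocab, M = 8, 20, 3

z_masked = rng.normal(size=(M, d))   # hidden states z_{m_k} of the masked tokens
h0 = rng.normal(size=d)              # global image [CLS] embedding
W = rng.normal(size=(d, vocab))      # hidden -> vocabulary projection
targets = np.array([4, 7, 1])        # ground-truth ids of the masked words

logits = (z_masked + h0) @ W         # visual embedding fused by simple addition
log_probs = logits - np.log(np.exp(logits).sum(-1, keepdims=True))  # log-softmax
loss_vmlm = -log_probs[np.arange(M), targets].mean()  # strictly positive NLL
```

Dropping `+ h0` from the `logits` line recovers plain MLM (Eq. 1), which is exactly the two models' only difference in this formulation.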
"Semantic-embedding Fused Masked Region Modeling (SMRM) Recent works on V+L pre-training (Lu et al., 2019; Tan and Bansal, 2019) have shown that mask-then-reconstruct pre-training on image regions also helps image+text embedding learning.", "Similar to MLM, masked region modeling (MRM) is supervised by: $\mathcal{L}_{\mathrm{MRM}}(i) = D_{\theta_{mrm}}(v_m, f_{\theta_V}(v_{\setminus m})) = \frac{1}{M}\sum_{k=1}^{M} D_{\theta_{mrm}}(v_{m_k}, h_{m_k})$, (3) where $D$ can be any differentiable distance function.", "Among the variants of MRM, we consider masked region feature regression (MRFR) with L2 distance and masked region classification with KL-divergence (MRC-kl), due to their proven success in learning V+L representations (Chen et al., 2020).", "In MRFR, the $L_2$ distance between two feature vectors $x$ and $y$ is defined as: $D_{fr}(x, y) = \sum_k \lVert x_k - g_{fr}(y_k) \rVert_2^2$, where $\lVert \cdot \rVert_2$ denotes the $L_2$-norm, and $g_{fr}(\cdot)$ is a learnable multi-layer perceptron (MLP) with parameters $\theta_{fr}$.", "The KL-divergence $D_{KL}$ in MRC-kl measures the distance between two probability distributions: $D_{mrc}(x, y) = \sum_k D_{KL}(c(x_k) \,\|\, g_{mrc}(y_k))$, where $\theta_{mrc}$ parameterizes a trainable MLP $g_{mrc}(\cdot)$ that maps the hidden vector $y_k$ to a class distribution, and $c(x_k)$ is the object class distribution predicted by Faster R-CNN.", "To incorporate the language information encoded in the paired text, we extend MRM to semantic-embedding fused MRM (SMRM), where the global text representation $z_0$ is exploited when reconstructing masked regions: $\mathcal{L}_{\mathrm{SMRM}}(i, t) = D_{\theta_{mrm}}(v_m, f_{\theta_V}(v_{\setminus m}), t) = \frac{1}{M}\sum_{k=1}^{M} D_{\theta_{mrm}}(v_{m_k}, h_{m_k} + z_0)$. (4)", "The specific variants SMRFR and SMRC-kl can be derived using the corresponding distance function, which is omitted for simplicity.", "Note that the cross-modal fusion introduced in Eqn. (2) and Eqn. (4) uses simple addition without introducing extra parameters beyond their uni-modal counterparts.", "Moreover, the extra parameters $\theta_{mlm}$ and $\theta_{mrm}$ are not needed at downstream inference, so they will 
not slow down the retrieval.", "Cross-modal Retrieval Objective (CMR) Beyond image- or text-focused reconstructive objectives, we also propose a new pre-training task, cross-modal retrieval (CMR), to leverage the paired information between image and text.", "With this learning objective, the model is optimized to promote a high similarity score for a matched image-sentence pair $(i, t)$ and vice versa.", "The similarity score between query $t$ and image $i$ is defined as: $S(t, i) = \langle z_0, h_0 \rangle$, (5) where $\langle \cdot, \cdot \rangle$ denotes the inner product between two vectors, and $h_0$ and $z_0$ are the output [CLS] embeddings from the image encoder $f_{\theta_V}$ and language encoder $f_{\theta_L}$, respectively.", "6 In our implementation, no textual inputs are directly concatenated with image regions, due to the separate encoding of image and text.", "In order to capture both image-retrieval and text-retrieval supervision signals in a single forward-backward pass, we propose a bi-directional variant of the contrastive loss.", "Given any matched image-text pair $(i, t)$, we treat text $t$ as the query, sample $n-1$ negative images $\{i_2, i_3, \ldots, i_n\}$, and then compute the objective function as: $\mathcal{L}^{(t)}_{IR} = -\log \frac{e^{S(t, i_1)}}{\sum_{k=1}^{n} e^{S(t, i_k)}}$, where $i_1 := i$.", "Similarly, we take image $i$ as the query ($t_1 := t$), sample $n-1$ negative texts, and compute: $\mathcal{L}^{(i)}_{TR} = -\log \frac{e^{S(i, t_1)}}{\sum_{k=1}^{n} e^{S(i, t_k)}}$ to optimize for text retrieval.", "Following Henderson et al. (2017), Gillick et al. (2019) and Karpukhin et al. (2020), we use in-batch negatives to avoid the actual sampling of negative images or texts: given a batch of $n$ positive image-text pairs $B = \{(i_1, t_1), \ldots, (i_n, t_n)\}$, we use all other images from within the batch as negatives ($\{i_j\}$, where $j \in \{1, 2, \ldots, n\}$ and $j \neq k$) for every positive pair $(i_k, t_k)$, and vice versa for negative texts.", "The final CMR loss for batch $B$ is: $\mathcal{L}_{\mathrm{CMR}}(B) = \frac{1}{2n}\sum_{k=1}^{n} \left( \mathcal{L}^{(i_k)}_{TR} + \mathcal{L}^{(t_k)}_{IR} \right)$.", "An illustration of $\mathcal{L}_{\mathrm{CMR}}$ is presented in Figure 3.", "Through joint pre-training with CMR, VMLM and SMRM, the visual-semantic embeddings learned from the image encoder and language encoder can be readily applied to downstream tasks.", "During the finetuning stage, we directly adopt the CMR loss to supervise the training process.", "7 The whole similarity matrix can be computed efficiently with one batched matrix multiplication call.", "This operation can take advantage of GPU hardware with Tensor Cores for faster training.", "For simplicity, we take text-to-image retrieval as an example to introduce the real-time inference pipeline (Figure 2(b)): ( i ) offline image feature extraction and encoding; ( ii ) online retrieval with a text query; and ( iii ) online re-ranking of the top-retrieved images.", "Text retrieval is conducted in a symmetric manner.", "Offline Feature Extraction The image retrieval task requires the model to rank every image $i$ in an image database $I$ based on its similarity to a text query $t$.", "In LightningDOT, we first apply the image encoder $f_{\theta_V}$ to all images in $I$, and cache the resulting global image representations $\{h_0^{(i)} \in \mathbb{R}^d \mid i \in I\}$ into an index (Johnson et al., 2019) in memory for later use.", "Note that the entire image-to-index process, including Faster R-CNN feature extraction and Transformer encoding, can all be conducted offline.", "Therefore, for every new query $t$ at real time, the cached index can be reused for maximum inference-time saving.", "Online Retrieval During inference, given a text query $t$, we encode it with the language encoder $f_{\theta_L}$, and then compute its similarity score to the embedding of every image in $I$ (stored in the memory index) via Eqn. (5).", "Finally, the images will be ranked by their similarity scores, from the highest to 
lowest.", "In practice, people are more interested in top-$K$ retrieval, with a list of $K$ images $I_t$ satisfying: $I_t := \{i_{m_k}\}_{k=1}^{K}$, where $S(t, i_{m_1}) \geq S(t, i_{m_2}) \geq \cdots \geq S(t, i_{m_K})$ and $S(t, i_{m_K}) \geq S(t, i)\ \forall i \in (I \setminus I_t)$. (7)", "This optimization problem has been well studied, and we use FAISS (Johnson et al., 2019) to solve it in our implementation.", "It is worth noting that, in order to apply fast search, the similarity function has to be decomposable .", "Therefore, we choose the simple dot product as $S$ instead of a more complicated neural network function.", "Similarly, for text retrieval, the same architecture can be applied by simply pre-computing the embeddings for all sentences and using an image as the query instead.", "Re-ranking To further improve retrieval accuracy, we propose a two-stage approach by adopting an optional re-ranking model.", "In the first stage, we use LightningDOT to retrieve the top-$M$ images (or texts), where $M$ is an integer much smaller than the database (index) size.", "Table 1: Evaluation results on image-to-text and text-to-image retrieval over Flickr30k and COCO test sets. Each row lists Text Retrieval R@1/R@5/R@10, Image Retrieval R@1/R@5/R@10, AR; '-' marks unreported numbers. COCO Test (5k images): VSE++ 41.3/69.2/81.2, 30.3/59.1/72.4, AR 58.9; SCO 42.8/72.3/83.0, 33.1/62.9/75.5, AR 61.6; GXN 42.0/-/84.7, 31.7/-/74.6, AR -; SCAN-single 46.4/77.4/87.2, 34.4/63.7/75.7, AR 64.1; R-SCAN 45.4/77.9/87.9, 36.2/65.6/76.7, AR 65.0; CAMP 50.1/82.1/89.7, 39.0/68.9/80.2, AR 68.3; CAAN 52.5/83.3/90.9, 41.2/70.3/82.9, AR 70.2; Unicoder-VL 62.3/87.1/92.8, 46.7/76.0/85.3, AR 75.0; UNITER-base 64.4/87.4/93.1, 50.3/78.5/87.2, AR 76.8; UNITER-large 65.7/88.6/93.8, 52.9/79.9/88.0, AR 78.1; OSCAR 73.5/92.2/96.0, 57.5/82.8/89.8, AR 82.0; LightningDOT 60.1/85.1/91.8, 45.8/74.6/83.8, AR 73.5; +UNITER-base Re-Ranker 64.6/87.6/93.5, 50.3/78.7/87.5, AR 77.0; +UNITER-large Re-Ranker 65.7/89.0/93.7, 53.0/80.1/88.0, AR 78.2; +OSCAR Re-Ranker 74.2/92.4/96.0, 57.4/82.7/89.9, AR 82.1. Flickr30K Test (1k images): VSE++ 52.9/80.5/87.2, 39.6/70.1/79.5, AR 68.3; SCO 55.5/82.0/89.3, 41.1/70.5/81.1, AR 69.9; GXN 56.8/-/89.6, 41.5/-/80.0, AR -; SCAN-single 67.9/89.0/94.4, 43.9/74.2/82.8, AR 75.4; R-SCAN 66.3/90.6/96.0, 51.4/77.8/84.9, AR 77.8; CAMP 68.1/89.7/95.2, 51.5/77.1/85.3, AR 77.8; CAAN 70.1/91.6/97.2, 52.8/79.0/87.9, AR 79.8; ViLBERT 58.2, 84.9, 72.8 (partially reported, as given); Unicoder-VL 86.2/86.3/99.0, 71.5/90.9/94.9, AR 88.1; UNITER-base 85.9/97.1/98.8, 72.5/92.3/95.9, AR 90.4; UNITER-large 86.9/98.1/99.2, 75.5/94.0/96.6, AR 91.7; OSCAR -; LightningDOT 83.9/97.2/98.6, 69.9/91.1/95.2, AR 89.3; +UNITER-base Re-Ranker 86.5/97.5/98.9, 72.6/93.1/96.1, AR 90.8; +UNITER-large Re-Ranker 87.2/98.3/99.0, 75.6/94.0/96.5, AR 91.8; +OSCAR Re-Ranker -.", "Next, we apply a stronger retrieval model (usually slower due to the use of cross-attention) to re-rank the retrieved top-$M$ pairs from the first stage.", "The final $M$ similarity scores obtained from the second stage will be used to re-compute the desired top-$K$ retrieval ($K \leq M$) in Eqn. (7).", "Please refer to Figure 2 for a more detailed visualization.", "Our experiments show that this two-stage approach can benefit from the best of both worlds: maintaining a constant fast speed per query 8 while achieving state-of-the-art accuracy.", "Another advantage of this pipeline is that it can readily incorporate any advanced model as the re-ranker; thus future, stronger image-text retrieval models can take advantage of LightningDOT for better efficiency.", "For pre-training, we use pre-processed data provided by Chen et al. (2020), including 4.2 million", "8 The computation time of LightningDOT is negligible compared to that of UNITER.", "Therefore, the empirical speed is proportional to the number of pairs UNITER has to rank: a constant $M$ for LightningDOT + UNITER vs. 
the whole database (index) size for UNITER only.", "images with 9.5 million associated captions from COCO (Chen et al., 2015), VG (Krishna et al., 2017), Conceptual Captions (Sharma et al., 2018), and SBU Captions (Ordonez et al., 2011).", "For evaluation, we use the Flickr30k (Plummer et al., 2015) and COCO (Lin et al., 2014) datasets, which include 31K/123K images, respectively, each associated with 5 human-written captions.", "Following Faghri et al. (2017), we split COCO into 114K/5K/5K and Flickr30K into 29K/1K/1K images for train, validation and test.", "Downstream performance is measured by recall at K (R@K) for both image and text retrieval tasks.", "We also use an additional metric, AR, the average of R@K for all K across both image and sentence retrieval tasks.", "We compare the proposed approach with state-of-the-art methods (with and without pre-training) and report the results in Table 1. Without cross-attention, our method outperforms non-pre-training approaches by large margins on all metrics.", "Specifically, our model improves over CAAN (Zhang et al., 2020) (the SOTA method with cross-attention) by 3.3% (73.5 vs. 70.2) on COCO and 9.5% (89.3 vs. 
79.8) on Flickr30K in terms of AR.", "When compared with methods without cross-attention (VSE++ (Faghri et al., 2017) and SCO (Huang et al., 2018)), LightningDOT achieves nearly a 20-point gain on AR.", "Table 2: Results on the extreme retrieval setting of the full Flickr30k and full COCO datasets. Each row lists Text Retrieval R@5/R@10/R@20, Image Retrieval R@5/R@10/R@20, AR. COCO Full (123K images): LightningDOT 40.1/51.0/62.0, 28.2/37.4/47.8, AR 44.4; +Re-Ranker-base 47.9/58.5/67.8, 35.7/45.2/55.2, AR 51.7; +Re-Ranker-large 48.0/59.0/68.9, 37.3/46.8/56.4, AR 52.7. Flickr30K Full (31K images): LightningDOT 69.6/78.9/86.1, 51.8/62.3/72.3, AR 70.2; +Re-Ranker-base 74.2/81.7/88.2, 56.9/66.7/75.6, AR 73.9; +Re-Ranker-large 75.1/83.9/90.5, 60.1/69.5/78.3, AR 76.2.", "Although LightningDOT achieves slightly lower AR than UNITER (a pre-training method with cross-attention), with a 3.5/1.1-point drop on Flickr30K/COCO, it is 600/1900 times faster than UNITER at inference time.", "We further apply second-stage re-ranking, and use UNITER to score the top-M image-text pairs retrieved by LightningDOT to obtain the final top-K ranked lists.", "With re-ranking, LightningDOT achieves an instant performance lift, surpassing UNITER on both benchmarks, while still being 46-95 times faster than UNITER.", "With an even stronger re-ranker, OSCAR, LightningDOT achieves results similar to the state-of-the-art performance on COCO.", "To demonstrate the efficiency of LightningDOT, we use UNITER-base as the baseline to compare inference speed.", "We also compare with a more lightweight cross-attention method, SCAN (Lee et al., 2018), which uses a GRU (Chung et al., 2014) instead of a 12-layer Transformer.", "All methods are tested on a single TITAN RTX GPU, with a batch size of 400.", "As shown in Table 3, SCAN is 1.9 times faster than UNITER-base across both benchmarks, as the computational cost of a GRU is much cheaper than that of a Transformer (though the performance drop is significant).", "However, the speedup from SCAN is limited, as it computes 
cross-attention between each query and all images.", "On the other hand, LightningDOT is 639 times faster than UNITER on Flickr30K.", "When tested with 5 times more images on COCO, the speedup from LightningDOT is 1,927 times.", "Even with re-ranking, LightningDOT is still much more efficient than UNITER-base (46 times faster on Flickr30K and 95 times faster on COCO).", "To mimic a real-life scenario for image retrieval, where the candidate pool contains hundreds of thousands of images, we combine all images from the training, validation and test sets to form a larger candidate pool.", "Note that models are still trained on the training set.", "Although the number of text queries remains the same, the number of candidate images scales up by more than 20 times, where cross-attention methods immediately become impractical.", "We refer to this setting on both benchmarks as Flickr30k-full (31k) and COCO-full (123k).", "Our algorithm is 6,591 times faster on Flickr30k-full and 23,869 times faster on COCO-full, which clearly shows the advantage of LightningDOT and its potential in real-world applications.", "With re-ranking, LightningDOT is still more than 1,000 and 2,000 times faster on Flickr30k-full and COCO-full, respectively.", "In general, for other re-rankers such as OSCAR, our algorithm can approximately speed up inference by $N_{images}/M$ times, where $N_{images}$ is the number of candidate images, and $M$ is the number of re-ranked images from the top-$M$ results retrieved by LightningDOT.", "Similarly, we construct a full setting for text retrieval by combining all text queries from the training, validation and test sets.", "Results are summarized in Table 2. 
Considering that the size of the candidate pool has become more than 20× larger, we adopt recall at top 5, 10, 50 as evaluation metrics.", "Our method achieves reasonably good performance, with AR of 44.4 on COCO and 70.2 on Flickr30K.", "Re-ranking further lifts AR to 56.4 and 76.2.", "Results from UNITER or SCAN are not included as the computation of pairwise scores is extremely expensive, given the excessive amount of retrieval candidates.", "While LightningDOT only takes minutes to evaluate, UNITER-base is estimated to take about 28 days to evaluate under the full setting for both benchmarks (this estimation is based on the inference time taken by UNITER-base on a smaller dataset).", "Table 4 (ablation studies on model design over the Flickr30K validation set; Text Retrieval R@1/R@5/R@10, Image Retrieval R@1/R@5/R@10, AR): R-CNN only 62.2/85.9/91.1, 42.0/70.9/80.3, 72.1; +Image Encoder 73.4/92.5/95.6, 59.5/84.5/90.3, 82.6; +PT 83.5/96.4/98.7, 68.6/90.5/94.8, 88.8; LightningDOT 85.2/96.4/98.7, 69.9/90.4/94.5, 89.2.", "In addition, we compare all models under the same setting: cache as much as possible for the fastest speed, where our model outperforms others in both speed and storage space on image retrieval.", "The proposed algorithm maps each image to a 768-dimensional vector, which only consumes about 300MB of storage space for the whole COCO dataset.", "Cross-attention models such as SCAN, UNITER or OSCAR also need to cache image features, which typically requires saving a 36 × 2048-dimensional matrix per image and consumes about 28GB of storage space for the COCO dataset.", "We conduct ablation studies on Flickr30K (Table 4) and compare LightningDOT (L4) against 3 ablated instances:", "(i) R-CNN only (L1): image representations are extracted from Faster R-CNN directly, with no image encoder applied;", "(ii) +Image Encoder (L2): regional features are encoded with a 12-layer Transformer as the image encoder;", "(iii) +PT (L3): our model is pre-trained with MLM+MRM+CMR, then finetuned on Flickr30K.", "Note that the
difference between MLM vs. VMLM and MRM vs. SMRM is whether the predictions of masked tokens (regions) rely on infused embeddings from the other modality.", "Results show that R-CNN only is not sufficient for learning good image representations for the ITR task, while an image encoder with a Transformer architecture can effectively learn contextualized image representations, hence achieving better performance.", "Pre-trained models (L3-4) generally achieve better performance compared to non-pretrained models (L1-2).", "Comparing +PT to the full instance of LightningDOT, the dependency on the other modality in VMLM and SMRM brings a universal performance lift across all metrics.", "This indicates that the cross-modal dependencies introduced by VMLM and SMRM are effective in learning the association between image and text inputs.", "In addition, we investigate the effectiveness of each pre-training task in Table 5.", "Compared to the baseline without pre-training, pre-training with CMR alone lifts AR by +1.4.", "Pre-training with all three tasks achieves the best performance, indicating that the learning of contextualized word and region representations promotes better global alignment between image and text, and these three pre-training tasks work collaboratively to yield better visual-semantic embeddings.", "We further report results on multilingual image-text retrieval tasks.", "Specifically, we evaluate LightningDOT under the translate-test setting, which translates the test captions in other languages to English by leveraging a Machine Translation (MT) tool.", "Note that our method is only trained on English captions, without exploiting the original or translated captions from multilingual benchmarks.", "We consider two benchmarks: Multi30K (Elliott et al., 2016, 2017; Barrault et al., 2018) with captions in German, French and Czech; and COCO Japanese (Yoshikawa et al., 2017) and Chinese (Li et al., 2019b).", "Average Recall (AR) is 
used as the evaluation metric.", "Meta-Ave, the average of AR over different languages across the two benchmarks, is used as a global metric.", "More details on the multilingual ITR benchmarks are included in the Appendix.", "We compare LightningDOT against 3 task-specific methods: S-LIWE (Wehrmann et al., 2019), MULE (Kim et al., 2020) and SMALR (Burns et al., 2020), which all exploit captions in different languages to learn multilingual or language-agnostic word embeddings.", "We also compare with a pre-trained model, M3P (Huang et al., 2020a), which is alternately pre-trained with image-caption pairs labeled in English and a cross-lingual corpus in 100 different languages.", "Note that all methods discussed above are trained/finetuned on captions in different languages.", "For a fair comparison, we report the performance of UNITER under the same translate-test setting, which is finetuned with English captions only and tested on translated captions.", "Table 6 shows similar trends of performance improvements as on the English benchmarks.", "Compared to both state-of-the-art task-specific methods and pre-trained models, LightningDOT under the translate-test setting achieves a new state of the art on most languages and establishes a strong baseline for future study on these multilingual benchmarks.", "Figure 4 shows an example of image retrieval results for the query \"Sky view of a blue and yellow biplane flying near each other\".", "In addition to the ground-truth image in the red rectangle, all 10 images retrieved by our model are valid retrievals, since multiple keywords (\"sky\", \"blue\", \"yellow\", \"airplane\", \"near\") are captured for each image.", "Please see Appendix A.4 for more examples.", "In this paper, we propose a pre-training framework that learns joint visual-semantic embeddings without any cross-attention between modalities.", "LightningDOT outperforms the previous state of the art, while significantly speeding up inference by 600-2,000× on Flickr30K and 
COCO image-text retrieval benchmarks.", "Future work includes extending the efficient training framework to other V+L tasks." ]
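The retrieval pipeline described in the record above (scoring every cached image embedding with a single dot product, then re-ranking only the top-M candidates with an expensive cross-attention model) can be sketched as follows. This is an illustrative toy, not the authors' code: the pool size, embedding dimension, and function names are all assumptions, and the real model uses 768-dimensional learned embeddings.

```python
import random

def dot(u, v):
    # Plain dot product between two equal-length vectors.
    return sum(a * b for a, b in zip(u, v))

def retrieve(query_vec, image_vecs, top_m):
    """Stage 1: score every cached image embedding with one dot product,
    then keep only the top-M candidates for an expensive re-ranker."""
    ranked = sorted(range(len(image_vecs)),
                    key=lambda i: dot(query_vec, image_vecs[i]),
                    reverse=True)
    return ranked[:top_m]

random.seed(0)
# Toy candidate pool: 1000 images with 4-dim embeddings (illustrative sizes).
pool = [[random.random() for _ in range(4)] for _ in range(1000)]
query = [1.0, 0.0, 0.0, 0.0]

top = retrieve(query, pool, top_m=50)
# A cross-attention re-ranker now runs on 50 pairs instead of 1000,
# i.e. roughly an N_images / M reduction in expensive forward passes.
speedup = len(pool) / len(top)
print(speedup)  # 20.0
```

This mirrors the N_images / M argument in the text: the cheap dot-product stage touches every candidate, while the costly re-ranker only sees M of them.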
[ "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "objective", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", 
"abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "objective", "abstain", "abstain" ]
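Each record in this dump stores a paper as two parallel arrays: a sentences sequence and a labels sequence over the tag set seen above (abstain, method, objective, result, other). A minimal sketch of a well-formedness check for such a record (the dict field names are assumed for illustration and are not part of any official schema):

```python
# Allowed tags, taken from the label arrays visible in this dump.
ALLOWED = {"abstain", "method", "objective", "result", "other"}

def check_record(record):
    """Verify that sentences and labels are aligned parallel arrays
    and that every label comes from the known tag set."""
    sents, labels = record["sentences"], record["labels"]
    assert len(sents) == len(labels), "parallel arrays must align"
    assert all(lab in ALLOWED for lab in labels), "unknown label"
    return len(sents)

example = {
    "sentences": ["We propose a model.", "It outperforms baselines."],
    "labels": ["objective", "result"],
}
print(check_record(example))  # 2
```

Running such a check would also surface the alignment damage that extraction introduced here (merged or split sentence strings no longer matching the label count).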
[ "Event Detection (ED) aims to recognize mentions of events (i.e., event triggers) and their types in text.", "Recently, several ED datasets in various domains have been proposed.", "However, the major limitation of these resources is the lack of enough training data for individual event types, which hinders the efficient training of data-hungry deep learning models.", "To overcome this issue, we propose to exploit the powerful pre-trained language model GPT-2 to generate training samples for ED.", "To prevent the noise inevitably present in automatically generated data from hampering the training process, we propose to exploit a teacher-student architecture in which the teacher learns anchor knowledge from the original data.", "The student is then trained on a combination of the original and GPT-generated data while being guided by the anchor knowledge from the teacher.", "Optimal transport is introduced to facilitate the anchor knowledge-based guidance between the two networks.", "We evaluate the proposed model on multiple ED benchmark datasets, gaining consistent improvements and establishing state-of-the-art results for ED.", "An important task of Information Extraction (IE) involves Event Detection (ED), whose goal is to recognize and classify words/phrases that evoke events in text (i.e., event triggers).", "For instance, in the sentence \"The organization donated 2 million dollars to humanitarian helps\"
, ED systems should recognize donated as an event trigger of type Pay.", "We differentiate two subtasks in ED, i.e., Event Identification (EI), a binary classification problem to predict whether a word in text is an event trigger or not, and Event Classification (EC), a multi-class classification problem to classify event triggers according to predefined event types.", "Research on ED has extended from feature-based models (Ahn, 2006; Liao and Grishman, 2010a; Miwa et al., 2014) to advanced deep learning methods (Nguyen and Grishman, 2015; Chen et al., 2015; Nguyen et al., 2016c; Sha et al., 2018; Zhang et al., 2020b; Nguyen et al., 2021).", "Although deep learning models have achieved substantial improvements, their requirement of large training datasets, together with the small sizes of existing ED datasets, constitutes a major hurdle to building high-performing ED models.", "Recently, there have been some efforts to enlarge training data for ED models by exploiting unsupervised (Huang et al., 2016; Yuan et al., 2018) or distantly-supervised (Keith et al., 2017; Nguyen and Nguyen, 2018; Araki and Mitamura, 2018) techniques.", "The common strategy in these methods is to exploit unlabeled text data that are rich in event mentions to aid the expansion of training data for ED.", "In this work, we explore a novel approach for training data expansion in ED by leveraging the existing pre-trained language model GPT-2 (Radford et al., 2019) to automatically generate training data for models.", "Motivated by the promising performance of GPT models for text generation, we expect our approach to produce effective data for ED in different domains.", "Specifically, we aim to fine-tune GPT-2 on existing training datasets so it can generate new sentences annotated with event triggers and/or event types, serving as additional training data for ED models.", "One direction to achieve this idea is to explicitly mark event triggers along with their event types in sentences of an existing ED dataset that can 
be used to fine-tune the GPT model for new data generation.", "However, one issue with this direction is that in existing ED datasets, the numbers of examples for some rare event types might be small, potentially leading to poor tuning performance of GPT and impairing the quality of generated examples for such rare events.", "In addition, the large numbers of event types in some ED datasets might make it more challenging for the fine-tuning of GPT to differentiate event types and produce high-quality data.", "To this end, instead of directly generating data for ED, we propose to use GPT-2 to only generate samples for the event identification task, to simplify the generation and achieve data with better annotated labels (i.e., output sentences are only marked with the positions of event triggers).", "As such, to effectively leverage the generated EI data to improve ED performance, we propose a multitask learning framework to train the ED models on the combination of the generated EI data and the original ED data.", "In particular, for every event trigger candidate in a sentence, our framework seeks to perform two tasks, i.e., EI to predict a binary label for being an event trigger or not, and ED to predict the event type (if any) evoked by the word via a multi-class classification problem.", "An input encoder is shared between both tasks, allowing training signals from both the generated EI data and the original ED data to contribute to representation learning in the encoder (i.e., transferring knowledge from the generated EI data to ED models).", "Despite the simplification to EI for better annotated labels, the generated sentences might still contain noise due to the inherent nature of language generation, e.g., grammatically wrong sentences, inconsistent information, or incorrect event trigger annotations.", "As such, it is crucial to introduce mechanisms to filter the noise in the generated data to enable effective transfer learning from the generated EI data.", "To this 
end, prior works on GPT-based data generation for other tasks have attempted to directly remove noisy generated examples via heuristic rules before actually using them for model training (Anaby-Tavor et al., 2020; Yang et al., 2020).", "However, heuristic rules are brittle and restricted in their coverage, so they might overly filter the generated data or incorrectly retain some noisy generated samples.", "To address this issue, we propose to preserve all generated data for training and devise methods to explicitly limit the impact of noisy generated sentences on the models.", "In particular, we expect that the inclusion of generated EI data in the training process for ED models might help to shift the representations of the models to better regions for ED.", "As such, we argue that this representation transition should only occur at a reasonable rate, as drastic divergence of representations due to the generated data might be associated with noise in the data.", "Motivated by this intuition, we propose a novel teacher-student framework for our multi-task learning problem where the teacher is trained on the original clean ED datasets to induce anchor representation knowledge for the data.", "The student, on the other hand, will be trained on both generated EI data and original ED data to accomplish transfer learning.", "Here, the anchor knowledge from the teacher will be leveraged to guide the student, penalizing noisy information by preventing drastic divergence of representation vectors.", "Consequently, we propose a novel form of anchor information to implement this idea, seeking to maintain the same level of difference between the generated and original data (in terms of representation vectors) for both the teacher and the student (i.e., the generated-vs-original data difference as the anchor).", "At the core of this technique is the computation of the distance/difference between samples in the generated and original data.", "In this work, we envision two types of information that 
models should consider when computing such distances for our problem: (1) representation vectors of the models for the examples, and (2) event trigger likelihood scores of the examples based on the models (i.e., two examples in the generated and original data are more similar if they both correspond to event triggers).", "As such, we propose to cast this distance computation between generated and original data as an Optimal Transport (OT) problem.", "OT is an established method to compute the optimal transportation between two data distributions based on the probability masses of data points and their pair-wise distances, thus facilitating the integration of the two criteria of event trigger likelihoods and representation vectors into the distance computation between data point sets.", "Extensive experiments and analysis reveal the effectiveness of the proposed approach for ED in different domains, establishing new state-of-the-art performance on the ACE 2005, CySecED and RAMS datasets.", "We formulate the task of Event Detection as a word-level classification problem as in prior work (Nguyen and Grishman, 2015; Ngo et al., 2020).", "Formally, given the sentence S = [w_1, w_2, . . . 
, w_n] and the candidate trigger word w_t, the goal is to predict the event type l from a pre-defined set of event types L.", "Note that if the word w_t is not a trigger word, the gold event type is None.", "Our proposed approach for this task consists of two stages: (1) Data Augmentation: employing natural language generation to augment existing training datasets for ED; and (2) Task Modeling: proposing a deep learning model for ED that exploits the available training data.", "As presented in the introduction, our motivation in this work is to explore a novel approach for training data augmentation for ED based on the powerful pre-trained language model for text generation, GPT-2.", "Our overall strategy involves using some existing training dataset O for ED (i.e., original data) to fine-tune GPT-2.", "The fine-tuned model is then employed to generate a new labeled training set G (i.e., synthetic data) that will be combined with the original data O to train models for ED.", "To simplify the training data generation task and enhance the quality of the synthetic data, we seek to generate data only for the EI subtask of ED, where synthesized sentences are annotated with the positions of their event triggers (i.e., event types for triggers are not required for the generation, to avoid the complication with rare event types for fine-tuning).", "To this end, we first enrich each sentence S ∈ O with the positions of the event triggers that it contains to facilitate the GPT fine-tuning process.", "Formally, assuming that S = [w_1, w_2, . . . , w_n] is a sentence of n words with only one event trigger word located at w_t, the enriched sentence S' for S would have the form: S' = [BOS, w_1, . . . , TRG_s, w_t, TRG_e, . . . 
, w_n, EOS], where TRG_s and TRG_e are special tokens to mark the position of the event trigger, and BOS and EOS are special tokens to identify the beginning and the end of the sentence.", "Next, the GPT-2 model will be fine-tuned on the enriched sentences S' of O in an auto-regressive fashion (i.e., predicting the next token in S' given prior ones).", "Finally, using the fine-tuned GPT-2, we generate a new dataset G of |O| sentences (|G| = |O|) to achieve a balanced size.", "Here, we ensure that only generated sentences that contain the special tokens TRG_s and TRG_e (i.e., involving event trigger words) are added into G, allowing us to identify the candidate trigger word in our word-level classification formulation for ED.", "As such, the combination A of the synthetic data G and the original data O (A = O ∪ G) will be leveraged to train our ED model in the next step.", "To assess the quality of the synthetic data, we sample sentences (generated by the fine-tuned GPT-2 model over the popular ACE 2005 training set for ED) and evaluate them regarding grammatical soundness, meaningfulness, and the inclusion and correctness of annotated event triggers (i.e., whether the words between the tokens TRG_s and TRG_e evoke events or not).", "Among the sampled set, we find that 17% of the sentences contain at least one type of such errors.", "This section describes our model for ED, designed to overcome the noise in the generated data G during model training.", "As discussed in the introduction, we employ the Teacher-Student framework with multitask learning to achieve this goal.", "In the proposed framework, the teacher and student employ base deep learning models with the same architecture but different parameters.", "Base Model: Following prior work (Wang et al., 2019), our base model uses the BERT_base model to represent each word w_i in the input sentence S with a vector e_i.", "Formally, the input sentence [[CLS], w_1, w_2, . . . 
, w_n, [SEP]] is fed into the BERT_base model, and the hidden states of the last layer of BERT are taken as the contextualized embeddings of the input words, i.e., E = [e_1, e_2, . . . , e_n].", "Note that if w_i contains more than one word-piece, the average of its word-piece embeddings is used for e_i.", "In our experiments, we find that fixing the BERT_base parameters achieves higher performance.", "As such, to fine-tune the contextualized embeddings E for ED, we employ a Bi-directional Long Short-Term Memory (BiLSTM) network to consume E; its hidden states, i.e., H = [h_1, h_2, . . . , h_n], are then employed as the final representations of the words in S.", "Finally, to create the final vector V for ED prediction, the max-pooled representation of the sentence, i.e., h = MAX_POOL(h_1, h_2, . . . , h_n), is concatenated with the representation of the trigger candidate, i.e., h_t.", "V is consumed by a feed-forward network, whose last layer has |L| neurons, followed by a softmax layer to predict the distribution P(·|S, t) over the possible event types in L.", "To train the model, we use the negative log-likelihood as the loss function: L_pred = -log P(l|S, t), where l is the gold label.", "As the synthetic sentences in G only involve information about the positions of event triggers (i.e., no event types included), we cannot directly combine G with O to train ED models with the loss L_pred.", "To facilitate the integration of G into the training process, we introduce EI as an auxiliary task for multi-task learning, seeking to predict the binary label l_aux for the trigger candidate w_t in S, i.e., l_aux = 1 if w_t is an event trigger.", "To perform this auxiliary task, we employ another feed-forward network, i.e., FF_aux, which also consumes the overall vector V as input.", "This feed-forward network has one neuron with the sigmoid activation function in the last layer to estimate the event trigger likelihood 
score: P(l_aux = 1|S, t) = FF_aux(V).", "Finally, to train the base model with the auxiliary task, we exploit the binary cross-entropy loss: L_aux = -(l_aux log(FF_aux(V)) + (1 - l_aux) log(1 - FF_aux(V))).", "Note that the main ED task and the auxiliary EI task are done jointly in a single training process, where the loss L_pred for ED is computed only on the original data O.", "The loss L_aux, in contrast, will be obtained for both original and synthetic data in A.", "Knowledge Consistency: The generated data G is not noise-free.", "As such, training the ED model on A could lead to inferior performance.", "To address this issue, as discussed in the introduction, we propose to first learn the anchor knowledge from the original data O, and then use it to lead the model training on A, preventing drastic divergence from the anchor knowledge (i.e., knowledge consistency promotion) and thus constraining the noise.", "Hence, we propose a teacher-student network, in which the teacher is first trained on O to learn the anchor knowledge.", "The student network will be trained on A afterward, leveraging consistency guidance with the anchor knowledge induced by the teacher.", "We will also use the student network as the final model for our ED problem in this work.", "In our framework, both teacher and student networks will be trained in the multi-task setting with the ED and EI tasks.", "In particular, the training losses for both ED and EI will be computed based on O for the teacher (the loss to train the teacher is L_pred + λ L_aux, where λ is a trade-off parameter).", "In contrast, the combined data A will be used to compute the EI loss for the student, while the ED loss for the student can only be computed on the original data O.", "As such, we propose to enforce the knowledge consistency between the two networks for both the main task ED and the auxiliary task EI during the training of the student model.", "First, to achieve the knowledge consistency for ED, 
we seek to minimize the KL divergence between the teacher-predicted and student-predicted label-probability distributions.", "Formally, for a sentence S ∈ O, the label-probability distributions of the teacher and the student, i.e., P_t(·|S, t) and P_s(·|S, t) respectively, are employed to compute the KL-divergence loss L_KL = Σ_{l ∈ L} P_t(l|S, t) log(P_t(l|S, t) / P_s(l|S, t)).", "By decreasing the KL divergence during the student's training, the model is encouraged to make similar predictions as the teacher for the same original sentence, thereby preventing noise from misleading the student.", "Note that, different from traditional teacher-student networks that employ KL to achieve knowledge distillation on unlabelled data (Hinton et al., 2015), the KL divergence in our model is leveraged to enforce knowledge consistency to prevent noise in labeled data automatically generated by GPT-2.", "Second, for the auxiliary task EI, instead of enforcing the student-teacher knowledge consistency via similar predictions, we argue that it will be more beneficial to leverage the difference between the original data O and the generated data G as anchor knowledge to promote consistency.", "In particular, we expect that the student, which is trained on A, should discern the same difference between G and O as the teacher, which is trained only on the original data O.", "Formally, during student training, for each mini-batch, the distances between the original data and the generated data detected by the teacher and the student are denoted by d_T(O,G) and d_S(O,G), respectively.", "To enforce the O-G distance consistency between the two networks, the following loss is added into the overall loss function: L_dist = |d_T(O,G) - d_S(O,G)| / |B|, where |B| is the mini-batch size.", "The advantage of this novel knowledge consistency enforcement compared to the KL divergence is that it explicitly exploits the different nature of the 
original and generated data to facilitate the mitigation of noise in the generated data.", "A remaining question for our proposed knowledge consistency concerns how to assess the difference between the original and the generated data from the perspective of the teacher, i.e., d_T(O,G), and the student network, i.e., d_S(O,G).", "In this section, we will describe our method from the perspective of the student (the same method is employed for the teacher network).", "In particular, we define the difference between the original and the generated data as the cost of transforming O into G such that, for the transformed data, the model will make the same predictions as for G.", "How can we compute the cost of such a transformation?", "To answer this question, we propose to employ Optimal Transport (OT), which is an established method to find the efficient transportation (i.e., the transformation with the lowest cost) of one probability distribution into another.", "Formally, given the probability distributions p(x) and q(y) over the domains X and Y, and the cost function C(x, y): X × Y → R+ for mapping X to Y, OT finds the optimal joint distribution π*(x, y) (over X × Y) with marginals p(x) and q(y), i.e., the cheapest transportation from p(x) to q(y), by solving the following problem: π*(x, y) = min_{π(x,y)} ∫_Y ∫_X π(x, y) C(x, y) dx dy s.t. 
x ∼ p(x) and y ∼ q(y), (1) where π(x, y) ranges over the set of all joint distributions with marginals p(x) and q(y).", "Note that if the distributions p(x) and q(y) are discrete, the integrals in Equation 1 are replaced with sums, and the joint distribution π(x, y) is represented by a matrix whose entry (x, y) represents the probability of transforming the data point x ∈ X into y ∈ Y to convert the distribution p(x) into q(y).", "By solving the problem in Equation 1, the cost of transforming the discrete distribution p(x) into q(y) (i.e., the Wasserstein distance Dist_W) is defined as: Dist_W = Σ_{x ∈ X} Σ_{y ∈ Y} π*(x, y) C(x, y).", "In order to utilize OT to compute the transformation cost between O and G, i.e., d_S(O,G), we propose to define the domains X and Y as the representation spaces of the sentences in O and G, respectively, obtained from the student network.", "In particular, a data point x ∈ X represents a sentence X_o ∈ O.", "Similarly, a data point y ∈ Y stands for a sentence Y_g ∈ G.", "To define the cost function C(x, y) for OT, we compute the Euclidean distance between the representation vectors of the sentences X_o and Y_g (obtained by max-pooling over the representations of their words): C(x, y) = ||h_{X_o} - h_{Y_g}||, where h_{X_o} = MAX_POOL(h_{X_o,1}, . . . , h_{X_o,|X_o|}), h_{Y_g} = MAX_POOL(h_{Y_g,1}, . . . 
, h_{Y_g,|Y_g|}), and h_{X_o,i} and h_{Y_g,i} are the representation vectors of the i-th words of X_o and Y_g, respectively, obtained from the student's BiLSTM.", "Also, to define the discrete distribution p(x) for OT over X, we employ the event trigger likelihood Score_{X_o} for the trigger candidate of each sentence X_o in X, which is returned by the feed-forward network FF^S_aux for the auxiliary task EI in the student model, i.e., Score_{X_o} = FF^S_aux(X_o).", "It is worth mentioning that this OT problem is intractable, so we solve its entropy-based approximation using the Sinkhorn algorithm (Peyre and Cuturi, 2019).", "Afterward, we apply the softmax function over the scores of the original sentences in the current mini-batch to obtain p(x), i.e., p(x) = Softmax(Score_{X_o}).", "Similarly, the discrete distribution q(y) is defined as q(y) = Softmax(Score_{Y_g}).", "To this end, by solving the OT problem in Equation 1 and obtaining the optimal transport plan π*(x, y) with this setup, we can obtain the distance d_S(O,G).", "In the same way, the distance d_T(O,G) can be computed using the representations and event trigger likelihoods from the teacher network.", "Note that in this way, we can integrate both the representation vectors of sentences and the event trigger likelihoods into the distance computation between the data, as motivated in the introduction.", "Finally, to train the student model, the following combined loss function is used in our framework: L = L_pred + α L_aux + β L_KL + γ L_dist, where α, β, and γ are trade-off parameters.", "To evaluate the effectiveness of the proposed model, called the GPT-based data augmentation model for ED with OT (GPTEDOT), we conduct experiments on the following ED datasets:", "ACE 2005 (Walker et al., 2006): This dataset annotates 599 documents for 33 event types that cover different text domains (e.g., news, weblog or conversation documents).", "We use the same preprocessing script and data split as prior works (Lai et 
al., 2020c; Tong et al., 2020b) to achieve fair comparisons.", "In particular, the data split involves 529/30/40 articles for the train/dev/test sets, respectively.", "For this dataset, we compare our model with prior state-of-the-art models reported in the recent works (Lai et al., 2020c; Tong et al., 2020b), including BERT-based models such as DMBERT, AD-DMBERT (Wang et al., 2019), DRMM, EKD (Tong et al., 2020b), and GatedGCN (Lai et al., 2020c).", "CySecED (Man Duc Trong et al., 2020): This dataset provides 8,014 event triggers for 30 event types from 300 articles of the cybersecurity domain (i.e., cybersecurity events).", "We follow the same pre-processing and data split as the original work (Man Duc Trong et al., 2020), with 240/30/30 documents for the train/dev/test sets.", "To be consistent with other experiments and facilitate the data generation based on GPT-2, the experiments on CySecED are conducted at the sentence level, where inputs for models involve sentences.", "As such, we employ the state-of-the-art sentence-level models reported in (Man Duc Trong et al., 2020), i.e., DMBERT (Wang et al., 2019) and BERT-ED (Yang et al., 2019), as the baselines for CySecED.", "RAMS (Ebner et al., 2020): This dataset annotates 9,124 event triggers for 38 event types.", "We use the official data split with 3,194, 399, and 400 documents for training, development, and testing, respectively, for RAMS.", "We also perform ED at the sentence level in this dataset.", "For the baselines, we utilize recent state-of-the-art BERT-based models for ED, i.e., DMBERT (Wang et al., 2019) and GatedGCN (Lai et al., 2020c).", "For a fair comparison, the performance of these baseline models is obtained via their official implementations from the original papers, fine-tuned for RAMS.", "For each dataset, we use its training and development data to fine-tune the GPT-2 model.", "We tune the hyperparameters for the proposed teacher-student architecture using a random search.", "All the 
hyperparameters are selected based on the F1 scores on the development set of the ACE 2005 dataset.", "The same hyperparameters from this tuning are then applied to the other datasets for consistency.", "In our model, we use the small version of GPT-2 to generate data.", "In the base model, we use BERT_base, 300 dimensions for the hidden states of the BiLSTM, and 2-layer feed-forward neural networks with 200 hidden dimensions to predict events.", "The trade-off parameters λ, α, β, and γ are set to 0.1, 0.1, 0.05, and 0.08, respectively.", "The learning rate is set to 0.3 for the Adam optimizer, and a batch size of 50 is employed during training.", "Finally, note that we do not update the BERT model for word embeddings in this work due to its better performance on the development data of ACE 2005.", "Results of experiments on the ACE 2005 test set are shown in Table 1.", "The most important observation is that the proposed model GPTEDOT significantly outperforms all the baseline models (p < 0.
01), thus showing the benefits of GPT-generated data and the teacher-student framework with knowledge consistency for ED in this work.", "In particular, compared to the BERT-based models that leverage data augmentation, i.e., AD-DMBERT (Wang et al., 2019) with semi-supervised and adversarial learning, [Table 1 — Performance on the ACE 2005 test set (Model: P / R / F1): CNN (Nguyen and Grishman, 2015) 71.8 / 66.4 / 69.0; DMCNN (Chen et al., 2015) 75.6 / 63.6 / 69.1; DLRNN (Duan et al., 2017) 77.2 / 64.9 / 70.5; ANN-S2 (Liu et al., 2017) 78.0 / 66.3 / 71.7; GMLATT (Liu et al., 2018) 78.9 / 66.9 / 72.4; GCN-ED (Nguyen and Grishman, 2018) 77.9 / 68.8 / 73.1; Lu's DISTILL (Lu et al., 2019) 76.3 / 71.9 / 74.0; TS-DISTILL (Liu et al., 2019) 76.8 / 72.9 / 74.8; DMBERT* (Wang et al., 2019) 77.6 / 71.8 / 74.6; AD-DMBERT* (Wang et al., 2019) 77.9 / 72.5 / 75.1; DRMM* (Tong et al., 2020a) 77.9 / 74.8 / 76.3; GatedGCN* (Lai et al., 2020c) 78.8 / 76.3 / 77.6; EKD* (Tong et al., 2020b) 79.1 / 78.0 / 78.6; GPTEDOT* 82.3 / 76.3 / 79.2]", "DRMM (Tong et al., 2020a) with image-enhanced models, and EKD (Tong et al., 2020b) with external open-domain event triggers, the better performance of GPTEDOT highlights the advantages of using GPT-2 to generate data for ED models.", "Results of experiments on the CySecED test set are presented in Table", "2. 
This table reveals that the teacher-student architecture GPTEDOT significantly improves the performance over previous state-of-the-art models for ED in the cybersecurity domain.", "This is important as it shows that the proposed model is effective in different domains.", "In addition, our results also suggest that GPT-2 can be employed to generate effective data for ED in domains where data annotation requires extensive domain expertise and is expensive to obtain, such as cybersecurity events.", "Moreover, the higher margin of improvement for GPTEDOT on CySecED compared to that on the ACE 2005 dataset suggests the necessity of using more training data for ED in technical domains.", "Finally, results of experiments on the RAMS test set are reported in Table", "3. Consistent with our experiments on ACE 2005 and CySecED, our proposed model achieves significantly higher performance than existing state-of-the-art models (p < 0.01), thus further confirming the advantages of GPTEDOT for ED.", "This ablation study evaluates the effectiveness of different components in GPTEDOT for ED.", "First, for the importance of the generated data G from GPT-2 and the teacher-student architecture to mitigate noises, we examine the following baselines: (1) Base O: This baseline is the base model trained only on the original data O, thus being equivalent to the teacher model and not using the student model; and (2) Base A: This baseline trains the base model on the combination of the original and generated data, i.e., A, using the multi-task learning setting (i.e., the teacher model is excluded).", "Second, for the multi-task learning design in the teacher network, we explore the following ablated models: (3) Teacher A: This baseline removes the auxiliary task EI in the teacher from GPTEDOT.", "As such, the OT-based knowledge consistency for EI is also eliminated; (4) Teacher M: In this model, the main task ED is not utilized to train the teacher, so the corresponding KL-based 
knowledge consistency for ED is also removed.", "Third, for the design of the knowledge consistency losses in the student network, we evaluate the following baselines: (5) Student OT: This ablated model eliminates the OT-based knowledge consistency loss for the auxiliary task EI in the student's training of GPTEDOT (the auxiliary task is still employed for the teacher and the student); (6) Student KL: For this model, the KL-based knowledge consistency for the main task ED is ignored in the student's training; (7) Student +OT: In this baseline, we use OT for the knowledge consistency on both the main and the auxiliary tasks.", "Here, for the main task ED, the cost function C(x, y) for OT is still obtained via the Euclidean distances between representation vectors while the distributions p(x) and q(y) are based on the maximum probabilities of the label-probability distributions P_s(.|X_o, t_o) and P_s(.|Y_g, t_g) for the ED task; and (8) Student +KL: This baseline employs the KL divergence [Table 4 — Ablation study on the ACE 2005 dev set (Model: P / R / F1): GPTEDOT (full) 82.4 / 75.0 / 78.5; Base O 78.2 / 73.7 / 75.9; Base A 75.8 / 73.9 / 74.9; Teacher A 76.9 / 78.1 / 77.5; Teacher M 75.8 / 77.9 / 76.9; Student OT 75.4 / 79.3 / 77.3; Student KL 76.8 / 77.3 / 77.0; Student +OT 76.1 / 76.6 / 76.4; Student +KL 77.1 / 76.7 / 76.9; OT Rep 76.8 / 77.3 / 77.0; OT Score 78.0 / 77.1 / 77.6]", "between models' predicted distributions to enforce the teacher-student consistency for both the main task and the auxiliary task.", "To this end, for the auxiliary task EI, we convert the final activation of FF_aux into a distribution with two data points (i.e., [FF_aux(X), 1 - FF_aux(X)]) to compute the KL divergence between the teacher and the student.", "Finally, for the importance of Euclidean distances and event trigger likelihoods in the OT-based distance between O and G for knowledge consistency in EI, we investigate two baselines: (9) OT Rep: Here, to compute OT, we use constant cost between every pair of 
sentences, i.e., C(x, y) = 1 (i.e., ignoring representation-based distances); and (10) OT Score: This model uses uniform distributions for p(x) and q(y) to compute the OT (i.e., ignoring event trigger likelihoods).", "We report the performance of the models (on the ACE 2005 development set) for the ablation study in Table", "4. There are several observations from this table.", "First, the generated data G and the teacher-student architecture are necessary for GPTEDOT to achieve the highest performance.", "In particular, compared with Base O, the better performance of GPTEDOT indicates the benefits of the GPT-generated data.", "Moreover, the better performance of Base O over Base A reveals that the simple combination of the synthetic and original data without any effective method to mitigate noises might be harmful.", "Second, the lower performance of Teacher A and Teacher M shows that both the auxiliary and the main task (i.e., multi-task learning) in the teacher are integral to producing the best performance.", "Third, the choice of methods to promote knowledge consistency is important and the proposed combination of KL and OT for the ED and EI tasks (respectively) is necessary.", "In particular, removing or replacing each of them with the other one (i.e., Student +OT and Student +KL) would decrease [Table 5 sample — ACE 2005: I was totally shocked by the court's decision to agree with Sam Sloan after he TRG_s sued TRG_e his children.]", "the performance significantly.", "Finally, in the proposed consistency method based on OT for EI, it is beneficial to employ both representation-level distances (i.e., OT Rep) and models' predictions for event trigger likelihoods (i.e., OT Score) as removing either of them hurts the performance.", "To provide more insights into the quality of the synthetic data G, we provide samples of sentences that are generated by the fine-tuned GPT-2 model on each dataset in Table", "5. 
This table illustrates that the generated sentences also belong to the domains of the original data (i.e., the cybersecurity domain).", "As such, combining synthetic data with original data is promising for improving ED performance, as demonstrated in our experiments.", "As discussed earlier, the generated data G is not free of noise.", "In order to better understand the types of errors existing in generated sentences, we manually assess 200 sentences randomly selected from the set G generated by the fine-tuned GPT-2 model on the ACE 2005 dataset.", "We categorize the errors into five types and provide their proportions along with an example for each error type in Table", "6. This table shows that the majority of errors are due to missing labels (i.e., no special tokens TRG_s and TRG_e are generated) or incorrect labels (i.e., marked words are not event triggers of the types of interest) generated by the language model.", "Finally, to study the importance of the size of the generated data to augment the training set for ED, we conduct an experiment in which different numbers of generated samples in G (for the ACE 2005 dataset) are combined with the original data O.", "The results are shown in Table", "7. 
According to this table, the highest performance of the proposed model is achieved when the numbers of generated and original samples are equal.", "More specifically, decreasing the number of generated samples potentially limits the benefits of data augmentation.", "On the other hand, increasing the size of the generated data might introduce extensive noises and become harmful to the ED models.", "Early methods for ED have employed feature-based techniques (Ahn, 2006; Ji and Grishman, 2008; Patwardhan and Riloff, 2009; Liao and Grishman, 2010a,b; Hong et al., 2011; McClosky et al., 2011; Li et al., 2013; Miwa et al., 2014; Yang and Mitchell, 2016).", "Later, advanced deep learning methods (Nguyen and Grishman, 2015; Chen et al., 2015; Nguyen et al., 2016a,b; Sha et al., 2018; Zhang et al., 2019; Yang et al., 2019; Nguyen and Nguyen, 2019; Zhang et al., 2020b) have been applied for ED.", "One challenge for ED research is the limited size of existing datasets, which hinders the training of effective models.", "Prior works have attempted to address this issue via unsupervised (Huang et al., 2016; Yuan et al., 2018), semi-supervised (Liao and Grishman, 2010a; Huang and Riloff, 2012; Ferguson et al., 2018), distantly supervised (Keith et al., 2017; Nguyen and Nguyen, 2018; Zeng et al., 2017; Araki and Mitamura, 2018), and few/zero-shot (Huang et al., 2018; Lai et al., 2020a,b) learning.", "In this work, we propose a novel method to augment training data for ED by exploiting the powerful language model GPT-2 to automatically generate new samples.", "Leveraging GPT-2 for augmenting training data has also been studied for other NLP tasks recently (e.g., relation extraction, commonsense reasoning) (Papanikolaou and Pierleoni, 2020; Zhang et al., 2020a; Yang et al., 2020; Madaan et al., 2020; Bosselut et al., 2019; Kumar et al., 2020; Anaby-Tavor et al., 2020; Peng et al., 2020).", "However, none of those works has explored GPT-2 for ED.", "In addition, existing methods only 
resort to heuristics to filter out noisy samples generated by GPT-2.", "In contrast, we propose a novel differentiable method capable of preventing noises from diverging representation vectors of the models for ED.", "We propose a novel method for augmenting training data for ED using the samples generated by the language model GPT-2.", "To avoid noises in the generated data, we propose a novel teacher-student architecture in a multi-task learning framework.", "We introduce a mechanism for knowledge consistency enforcement to mitigate noises from generated data based on optimal transport.", "Experiments on various ED benchmark datasets demonstrate the effectiveness of the proposed method.", "This research has been supported by the Army Research Office (ARO) grant W911NF-21-1-0112 and the NSF grant CNS-1747798 to the IU-CRC Center for Big Learning.", "This research is also based upon work supported by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via IARPA Contract No. 2019-19051600006 under the Better Extraction from Text Towards Enhanced Retrieval (BETTER) Program.", "The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ARO, ODNI, IARPA, the Department of Defense, or the U.S. Government.", "The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.", "This document does not contain technology or technical data controlled under either the U.S. International Traffic in Arms Regulations or the U.S. Export Administration Regulations." ]
[ "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "objective", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "method", "objective", "abstain", "abstain", "objective", "abstain", "method", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "objective", 
"other", "other", "other", "objective", "objective", "objective", "method", "abstain", "other", "other", "other", "other", "other" ]
[ "While large language models have shown exciting progress on several NLP benchmarks, evaluating their ability for complex analogical reasoning remains under-explored.", "Here, we introduce a high-quality crowdsourced dataset of narratives for employing proverbs in context as a benchmark for abstract language understanding.", "The dataset provides fine-grained annotation of aligned spans between proverbs and narratives, and contains minimal lexical overlaps between narratives and proverbs, ensuring that models need to go beyond surface-level reasoning to succeed.", "We explore three tasks: (1) proverb recommendation and alignment prediction, (2) narrative generation for a given proverb and topic, and (3) identifying narratives with similar motifs.", "Our experiments show that neural language models struggle on these tasks compared to humans, and these tasks pose multiple learning challenges.", "Large language models (LLMs) (Devlin et al., 2019; Liu et al., 2019a; Raffel et al., 2020; Lewis et al., 2020; Reimers and Gurevych, 2019; Sanh et al., 2019; Lan et al., 2020) have led to a paradigm shift in NLP, and have shown exciting progress on benchmarks such as GLUE (Wang et al., 2019b) and SuperGLUE (Wang et al., 2019a).", "In particular, these include tasks such as reading comprehension, natural language inference, and coreference resolution.", "Many of these tasks rely on semantic and syntactic reasoning, which has been mastered by these LLMs.", "For example, apart from improving on distributional semantics through contextualized embeddings (Ethayarajh, 2019), recent work has shown evidence that these models implicitly learn emergent concepts such as subject-verb agreement (Jawahar et al., 2019), semantic roles (Tenney et al., 2019) and dependency structures (Hewitt and Manning, 2019).", "However, humans show an ability for deeper linguistic reasoning.", "We can identify people's intentions and goals (Douglas and Sutton, 2006), perform relational reasoning 
(Alexander et al., 2016), and find analogies in situations with little surface overlap (Holyoak, 2013).", "In particular, making verbal analogies in the form of proverbs is noted as an indicator of literary ability (Penfield and Duru, 1988; Nippold et al., 2001).", "Proverbs are also repositories of information on culture, societal norms, values, and folk wisdom (Raymond, 1956; White, 1987).", "In this work, we investigate proverbs in narrative contexts as a testbed for evaluating abstract reasoning and analogical abilities of LLMs.", "We introduce ePiC (employing Proverbs in Context), a high-quality crowdsourced dataset of narratives paired with proverbs.", "The dataset provides fine-grained annotation of aligned spans between proverbs and narratives, and is designed to minimize lexical overlap between narratives and proverbs.", "Figure 1 shows two examples of narratives for a proverb from our dataset, along with corresponding alignment annotations.", "We diverge from related extant resources (Wang et al., 2020; Tan et al., 2015, 2016) on using proverbs in terms of quality of narratives, direct supervision, and having fine-grained alignment annotations.", "We explore three tasks: (1) proverb and alignment prediction (§5.1), (2) narrative generation for a given proverb and a set of keywords specifying a topic (§5.2), and (3) discovering narratives with similar motifs (§5.3).", "By benchmarking several LLMs, we find that existing models struggle with these tasks, suggesting much scope for improvement in abstract reasoning.", "In particular, humans show much higher performance in many cases.", "In §3, we describe the crowdsourced creation of the ePiC dataset.", "In §4, we analyze lexical overlap, biases, and narrative quality in ePiC.", "§5 describes the three tasks and details of the experimental evaluation of LLMs for each task.", "We conclude with a discussion, and a statement of ethics and broader impact relevant to our work.", "Our contributions are: We introduce ePiC, a high-quality dataset for employing 
proverbs in context.", "It contains multiple narratives for English proverbs and fine-grained annotation of aligned spans between them.", "We design three challenging tasks that require models to go beyond surface-level reasoning and provoke research towards making more socially grounded NLP systems.", "We benchmark the performance of several state-of-the-art large language models in our proposed tasks using our dataset.", "Our dataset and code are publicly available at: https://epic-benchmark.github.io [2 Related Work] Prior works in figurative language understanding have explored a diverse set of topics, such as simile detection and generation (Niculae and Danescu-Niculescu-Mizil, 2014; Mpouli, 2017; Zeng et al., 2020; Chakrabarty et al., 2020), metaphor detection and generation (Dagan et al., 2005; Gao et al., 2018; Stowe et al., 2019, 2021; Chakrabarty et al., 2021b) [footnote 1: Existing datasets are automatically created by scraping web-text, and supervision is heuristic (based on cooccurrences of proverbs and contexts).], pun identification (Poliak et al., 2018; Miller and Turkovic, 2016), and quote/proverb recommendation (Tan et al., 2015, 2016; Wang et al., 2020).", "Recent work (Chakrabarty et al., 2021a) has also focused on interpreting idioms and similes in narratives.", "Liu et al. (2019b) have explored recommending Chinese idioms through context-based recommendation and Zheng et al. (2019) formulated idiom recommendation as a cloze-style reading comprehension task.", "Learning to quote has been explored based on fiction (Tan et al., 2015, 2016) and noisy social media conversations from Twitter, Reddit, or Weibo (Lee et al., 2016; Wang et al., 2020).", "In the most related prior work, authors explore a quote retrieval task borrowing inspiration from context-based recommendation systems (Huang et al., 2012; He et al., 2010).", "Wang et al. 
(2020) formulated learning to quote as a generation task by using topic modeling (Miao et al., 2017; Wang et al., 2019c) in a sequence-to-sequence network.", "While previous work has considered idioms, proverbs and common phrases as quotes, we specifically work with proverbs.", "Compared to earlier datasets, our dataset is manually created and labeled.", "Further, ePiC includes fine-grained annotations aligning parts of the proverb to parts of the narrative, which has significant possibilities for model training, evaluation and interpretability.", "We now describe the process of creating the dataset in detail.", "Proverb collection: We obtained a candidate set of English proverbs by scraping the websites of 'The Phrase Finder' [2] and WikiQuotes [3].", "Next, this set was manually pruned to remove lexical variations of the same proverb.", "This manual curation led to a set of 250 proverbs, which we consider in the current version of our dataset.", "Narrative collection: In the second step, we use Amazon Mechanical Turk to collect a diverse set of narratives corresponding to each proverb.", "We collect 10 narratives contributed by distinct turkers for each proverb, leading to a total of 2500 proverb-narrative pairs.", "We also ensure that no turker contributes a large number of narratives [footnote 2: https://www.phrases.org.uk/ ; footnote 3: https://en.wikiquotes.org/wiki/English_proverbs] to alleviate annotator bias (Geva et al., 2019) (where models can overfit to annotator characteristics) while encouraging diversity in writing style and content.", "The turkers were asked to write short realistic stories, preferably within 100 words.", "Additionally, to avoid surface-form biases, turkers were encouraged to minimize lexical overlap and to not mention the proverb or parts of it in the narrative.", "This was done so that doing well on the tasks requires a detailed understanding of the narratives rather than relying on surface-level cues.", "Turkers were paid 50 cents for each narrative for this task.", "Span alignment 
annotation: Next, we solicit fine-grained annotations between the narratives and the proverb in the form of aligned spans.", "For this, we present proverb-narrative pairs to turkers, asking them to find contiguous spans in the narrative which align well with contiguous spans in the proverb.", "Turkers could submit up to 5 pairs of aligned spans per proverb-narrative pair.", "These aligned spans highlight the grounding of a proverb in the narrative (see Figure 1).", "These annotations can help to verify the reasoning capabilities of various neural models by checking if these models are able to identify these correspondences, and add interpretability to our tasks.", "Turkers were paid 25 cents for each proverb-narrative pair annotation for this task.", "Statistics: Table 1 shows the statistics of narrative collection for the proverbs.", "The narrative writing task was perceived as challenging yet interesting by most turkers due to", "(a) not having outlines about topics for the narrative beforehand", "(b) the requirement of low lexical overlap with the proverb.", "Thus, the narrative writing task had a learning curve and some of the narratives submitted initially were not included in the dataset.", "Table 2 shows some statistics of the dataset collected through the process described in §3.", "In this section, we analyze the characteristics and biases of the ePiC dataset in detail.", "Using n-grams: We evaluate the extent of lexical overlap between proverbs and narratives by computing common n-grams between them.", "Table 3 reports the average Jaccard similarity score between n-gram sets of proverbs and narratives, and the average number of common n-grams.", "On average, there are 1.27 unigrams common between narratives and proverbs (including stopwords).", "In comparison, randomly permuting assignments of proverbs for narratives yields an average unigram Jaccard similarity of 0.0211 and 1.06 common unigrams.", "Thus, the overlap metrics in the dataset are comparable to those 
between unrelated texts.", "To evaluate diversity among narratives corresponding to a proverb, we compute the average Jaccard similarity between the sets of unigrams for the narratives.", "This score is 0.107, which is comparable to a value of 0.098 for unigram overlap between pairs of narratives from different proverbs.", "This suggests a high lexical diversity between narratives.", "Using distributional embeddings: We explore if we can retrieve the correct proverb corresponding to a narrative only by using similarity in their distributional representations.", "The similarity between a proverb and a narrative is defined as the cosine similarity between the representations of the proverb and the narrative obtained using word2vec embeddings (Mikolov et al., 2013) or contextual embeddings from LLMs.", "Details of implementation are provided in Appendix F.1.", "For this retrieval task, we report the accuracy and Mean Reciprocal Rank of the correct proverb in", "Table", "4. We note that while all models perform better than random (with Sentence-BERT performing the best), the performance is very low when using out-of-the-box representations.", "In §5, we explore learning-based methods for the same setup.", "Diversity of narrative events: Fig 2 shows", "the distribution of events in our dataset.", "Following Mostafazadeh et al. 
(2016) we find events as the hyponyms of the word 'event' or 'process' using WordNet (Fellbaum, 2010).", "We see that the top events comprise less than 3% of all events in our dataset, and the long tail of less frequent events shows the diversity of the dataset.", "Sentiment analysis: To evaluate the presence of sentiment association bias between proverbs and corresponding narratives (e.g., if negative sentiment proverbs only correspond to negative sentiments in narratives), we perform sentiment analysis of the narratives using VADER (Hutto and Gilbert, 2014).", "Figure 3 shows the average sentiment scores of the narratives corresponding to a proverb plotted against the sentiment score of the proverb.", "We find that the narratives are diverse in terms of their sentiment [Table 5 — Averaged Likert scale ratings for data quality (Criterion: ePiC / [1] / [2]): Relatedness 3.91 / 3.15 / 3.92; Interesting/Creative 3.57 / 3.34 / 3.63; Fluency 3.98 / 3.23 / 3.80; Overall 3.68 / 3.15 / 3.66]", "polarities, showing a weak positive correlation (Pearson correlation score 0.35) with the sentiment score of the proverbs.", "Figure 4 shows the variance", "in terms of the number of positive and negative sentiment narratives (out of 10) for each proverb, showing a diverse spread of narrative sentiment polarities across proverbs.", "For additional details, please refer to Appendix A.", "We performed a few additional analyses on our dataset and found (1) that around 61% of mentions in the narratives were male, (2) a diverse spread of reading complexity values in narratives, measured using Flesch reading ease [4], and (3) an absence of any hate speech in the narratives of our dataset.", "The detailed experiments for these analyses are given in Appendix A.", "We perform a human evaluation of the narratives in our dataset on various criteria to judge the quality of our dataset.", "We perform this evaluation using the AMT platform.", "We randomly sample 250 proverb-narrative pairs and ask the 
turkers to evaluate the narratives on the following criteria: Relatedness: how closely the narrative reflects [footnote 4: https://en.wikipedia.org/wiki/Flesch_Kincaid_readability_tests] [Figure 4: Count of narratives with positive or negative VADER sentiment for each proverb.]", "the meaning of the proverb (1: totally unrelated, 5: perfectly related) Interesting/Creative: how much is the narrative like a short creative or interesting story (1: very uninteresting/boring, 5: very creative/story-like) Fluency: grammatical correctness of the narrative (1: poor English with grammatical mistakes, 5: perfect English with no errors in writing) Overall rating All the ratings are done on Likert scales from 1 to 5, where 1 is the lowest value for each criterion and 5 is the highest.", "Also, the rating value '3' was calibrated to be slightly leaning to the higher end of the scale (instead of neutral) so that the turkers take a clear stand on the polarity of each criterion.", "Table 5 shows the qualitative evaluation of our dataset.", "The average overall rating was 3.67 and the average pair-wise inter-annotator agreement for labeling a narrative as overall good vs overall poor (overall score >= 3 vs < 3) is 0.84 [5].", "We also rate the quality of the aligned spans in our dataset similarly on a scale of 1 to", "5. The average rating of the alignment between spans was 3.91 and the average pair-wise inter-annotator agreement for alignment as good vs poor (rating >= 3 vs < 3) is 0.86 [5].", "Table 6 highlights the key differences between ePiC and prior work that dealt with related figurative language tasks involving quotes.", "Notably, ePiC exclusively deals with proverbs unlike prior work (which includes common phrases and idioms such as \"trust your gut\") and also provides granular annotations in the form of annotated spans. 
Also, [footnote 5: Due to label imbalance, kappa statistics for inter-annotator agreement are not reliable (Feinstein and Cicchetti, 1990); thus, we report the average pairwise agreement score, i.e., how often two judges agree on a label for a sample.] ePiC contains narratives crowdsourced by specifically keeping proverbs in focus, rather than obtaining them using heuristic supervision. To quantify dataset quality, we ran a human evaluation similar to ePiC's over (1) 200 randomly drawn samples from the \"Reddit\" dataset of quotations in context from Wang et al. (2020), and (2) 200 randomly drawn samples from the corpus of Tan et al. (2015).", "Based on the average Likert scores in Table 5, we find that ePiC is (1) significantly superior (t-test; p < 0.05) on all criteria to Wang et al. (2020), and (2) better in overall ratings than Tan et al. (2015).", "In this section, we introduce three tasks associated with ePiC and describe their experimental setup and benchmark results: (1) Proverb and Alignment Prediction, (2) Narrative Generation, and (3) Identifying narratives with similar motifs.", "In this task, the objective is to predict the correct proverb for a given narrative from the set of 250 proverbs in the dataset.", "The motivation of this task is to test whether language models can abstract the underlying meaning of the narratives and make an analogy with the correct proverb from a large set of proverbs.", "In terms of applications, this task is related to proverb recommendation, which can be useful in creative writing assistants.", "The task is challenging as there might be multiple proverbs loosely related to the narrative context, but not completely consonant with subliminal themes in the narrative.", "An underlying assumption here is that a narrative would match well with exactly one proverb.", "We found this reasonable for most examples in the dataset.", "We consider two settings: (1) Seen proverbs, and (2) Unseen proverbs.", "Seen proverbs: The set of proverbs in the train and test sets are the same.", "We divide 
narratives corresponding to each proverb into train and test in a 6:4 ratio.", "So, the train and test sets have 1500 and 1000 proverb-narrative pairs respectively.", "Unseen proverbs: Here, we consider 150 proverbs in the train set and the remaining 100 proverbs in the test set (6:4 split on the set of proverbs).", "The train and test sets thus have 1500 and 1000 narratives [Table 6 — Comparing ePiC with prior works on learning to quote based on different characteristics of the data and the collection process. Characteristic (Tan et al. (2015) / Lee et al. (2016) / Wang et al. (2020) / ePiC): Domain: Fiction / Social Media / Social Media / Fiction; Manual curation of narratives: ✗ / ✗ / ✗ / ✓; Alignment annotation: ✗ / ✗ / ✗ / ✓; Focus on proverbs: ✗ / see footnote 6 / ✗ / ✓]", "respectively (since each proverb has 10 narratives).", "Proverb prediction: Here we focus on only predicting the corresponding proverb for a narrative, without employing the span alignments in training or evaluation.", "For this, we fine-tune the retrieval models based on different LLMs previously described in §4 (details of models in Appendix F.2).", "To evaluate performance, we consider accuracy and Mean Reciprocal Rank as metrics.", "Table 7 shows the best proverb prediction performance on the test split for 'seen' and 'unseen' proverbs [7].", "RoBERTa performs the best for both the 'seen' and 'unseen' settings, and the performance of all models is consistently lower for unseen proverbs (as would be expected, since this task involves much greater generalization).", "Further, while the performance of all models is much better than chance, even the highest performance is only 28.2%.", "Alignment prediction: Here we focus only on predicting an aligned span from the narrative given the narrative, proverb, and a span from the proverb as inputs.", "We fine-tune two large language models (BERT and RoBERTa) for this by adopting a learning framework similar to answer span prediction for SQuAD (Rajpurkar 
et al., 2016).", "The language model outputs two probability distributions over the narrative tokens, corresponding to the start and end positions of a span.", "We iterate over all the combinations of the start and end tokens and choose the span with maximum likelihood.", "For span prediction, we report token-level precision, recall, and F1.", "Table 8 shows the results of alignment [Table 8: Alignment prediction performance for 'seen' proverbs using LLMs ('base' versions). MODEL | SPAN P | SPAN R | SPAN F1. BERT: 0.070 | 0.123 | 0.089. RoBERTa: 0.068 | 0.143 | 0.092.]", "prediction on the 'seen' proverbs using the BERT and RoBERTa models.", "We find that the performance is low for both models, indicating major scope for improvement.", "Predicting proverbs and alignment jointly: We formulate this as multi-task learning.", "We extend the models from the proverb prediction task by adding a component to predict a span from the narrative given a span from the proverb and the narrative.", "The language model is thus shared across the proverb prediction and span prediction tasks.", "The span prediction branch predicts the start and end positions of the corresponding narrative span.", "We jointly train the model with multi-task learning of the two tasks, i.e., proverb and alignment prediction, on the 'seen' proverbs data split.", "We report the accuracy for proverb prediction and precision, recall, and F1 for span prediction.", "Apart from this joint model, we also consider a pipelined baseline model which first does proverb prediction, followed by span pre- [Footnote 6: We did not have access to the dataset to verify this.]", "diction if the correct proverb was predicted.", "Table 9 shows results for the joint model and the pipelined-baseline model.", "The low performance of the models indicates major scope for improvements in the individual tasks.", "While in principle the two tasks should benefit from joint training, we find that joint training performs worse than the pipelined baseline for both proverb and alignment 
prediction.", "Future work can explore designing better models for joint training to leverage the interdependence between proverb prediction and alignment prediction.", "Figure 5 shows a heatmap to study the differences in prediction accuracies of the BERT and RoBERTa models.", "We see that RoBERTa generally outperforms BERT in many cases (in Figure 5, values in the bottom-right triangle are typically greater than those in the top-left).", "Looking into the narratives for proverbs in the test set with high accuracy (>=0.75), we think a reason for the high performance could be the presence of certain words/phrases which are synonymous with some words/phrases in the proverb (for example, the presence of the word 'group' for the proverb 'birds of a feather flock together').", "On the other hand, there are cases when the model is confused because of multiple topics being discussed in the narrative, resulting in an incorrect prediction.", "For example, some narratives in the test set for the proverb 'life's not all beer and skittles' describe earning money the hard way, which confused the RoBERTa model into predicting 'time is money' for such narratives.", "To formulate a feasible task for humans, we frame proverb prediction as a multiple choice question (MCQ) task where, for each narrative, 5 proverbs are provided as choices.", "The set of choices includes the correct proverb and 4 other distractor [Figure 5: Heatmap showing the percentage of proverbs with various fine-tuned BERT and RoBERTa proverb prediction accuracies (for example, more than 15% of the proverbs have a RoBERTa prediction accuracy of 25% and a BERT prediction accuracy of 25%); axes: RoBERTa accuracy (%) vs. BERT accuracy (%).]", "proverbs, chosen by using the fine-tuned RoBERTa model.", "Examples of 
the MCQ task and details of choosing distractors are provided in Appendix B.", "Table 10 shows the accuracy of the human evaluation for this MCQ task, conducted using AMT, on a random sample of 100 narratives from the test split of \"seen\" proverbs.", "Compared to RoBERTa, we find humans are much better at this adversarially created MCQ task.", "Note that the performance for RoBERTa in Table 10 and Table 7 is different, as Table 10 reports accuracy only on the random sample of narratives chosen for human evaluation.", "The estimate for human performance is likely an underestimate, since in many cases human subjects were unfamiliar with the meanings of some of the proverbs provided in the options and, as a result, focused more on surface-level cues (details of this analysis are provided in Appendix B).", "The average pair-wise inter-annotator agreement between human subjects for this task was 0.73.", "This evaluation does not take into account semantic similarity between proverbs (two proverbs might be equally suitable for the same context).", "To explore this, we analyze the human errors on the MCQ task and find that in only around 11% of the errors, the proverb chosen by humans is semantically similar to the annotated proverb and can also be a suitable answer to the MCQ task.", "Details about this analysis are given in Appendix C.", "Future work can consider handling semantic similarity between proverbs explicitly and devising suitable evaluation metrics.", "One of the important use-cases for NLP models in the creative writing domain is to use these models to generate content.", "We explore the task of generating narratives corresponding to a proverb and a given topic (specified as a set of keywords).", "We benchmark the performance of two recently proposed state-of-the-art models in text generation, T5 (Raffel et al., 2020) and BART (Lewis et al., 2020), by fine-tuning them on ePiC.", "We divide our dataset into train and test splits under 'seen' and 'unseen' 
proverbs settings similar to the proverb prediction task.", "We consider the set of verbs and named entities as the keywords for a narrative.", "We train our narrative generation model conditioned on the proverb and the keywords.", "Table 11 shows results for automatic evaluation of the generated narratives using BLEU (Papineni et al., 2002), ROUGE-L (Lin, 2004), and recall of the keywords mentioned in the generated narrative as metrics.", "Examples of generated narratives are given in Appendix D.", "We find that BART performs better than T5 on the automatic evaluation metrics.", "Further, we perform human evaluation of the quality of the generated narratives on AMT, considering the same criteria (and rating semantics) employed in Section 4.3.", "Table 12 shows the human evaluation of generated narratives using BART and T5 when tested over 'seen' proverbs.", "Low scores for BLEU and ROUGE-L in automatic metrics and low Likert ratings of the generated narratives indicate much scope for future improvement on this task.", "An important aspect of language understanding is the ability to make linguistic (and narrative) analogies, i.e., identifying 'similarity' between narratives (e.g., identifying two narratives that are variations on the 'Cinderella story' theme).", "Here, we explore the task of identifying narrative analogy by modeling 'similarity' between narratives based [Table 11: Automatic evaluation for narrative generation on 'seen' and 'unseen' proverbs using 'base' versions of LLMs. MODEL | BLEU | ROUGE-L | RECALL. Seen proverbs: BART 4.21 | 30.80 | 0.90; T5 2.25 | 27.83 | 0.77. Unseen proverbs: BART 4.39 | 31.36 | 0.93; T5 2.34 | 26.61 | 0.75.]", "on proverbs illustrated by them.", "For this task, two narratives are taken to be similar if they are related to the same proverb.", "We use the train and test split of the 'seen' proverbs setup in the proverb prediction task.", "The aim is to find similar narratives for each narrative in the test split amongst all narratives in the 
test split.", "So for each narrative, there are 3 other similar narratives (corresponding to the same proverb) in the test split (containing 1000 narratives).", "Modeling similarity between narratives: We use the learned models in the proverb prediction task to obtain a probability distribution over the proverbs for each narrative.", "To model similarity, we compute the distance between the (vectors representing) two probability distributions using one of the following: (1) cosine distance; (2) Jensen-Shannon divergence; (3) L2 (Euclidean) distance; and (4) L1 (Manhattan) distance.", "We predict the narrative closest (in terms of the distance metric) to the input narrative as the most similar.", "Table 13 shows the accuracy of getting a similar narrative using different distance metrics and different fine-tuned LLMs.", "Using cosine distance or Jensen-Shannon divergence as the distance metric on the probability distribution over proverbs predicted by the RoBERTa model performs best on this task.", "However, the overall performance of the models is still low and could benefit from devising suitable training methods for this task.", "We perform an additional experiment on finding similar narratives without performing proverb prediction as an intermediate step.", "We use a pre-trained Sentence-BERT model to obtain representations of each narrative.", "For a given input narrative, we calculate the cosine distance between the Sentence-BERT representations of the input narrative and all other narratives in the test set.", "We predict the narrative having minimum cosine distance to the input narrative as the most similar.", "Using this approach, we find the accuracy of identifying similar narratives to be 6.6%, which is lower than most values reported in Table 13.", "This low value highlights the diversity between narratives and the challenge of finding analogies between them.", "We introduce ePiC, a high-quality crowdsourced dataset of narratives paired with proverbs, and a suite of 
challenging tasks associated with this dataset.", "We show that these provide a challenging testbed for evaluating abstract reasoning and analogical abilities of LLMs.", "Future work can explore more sophisticated mechanisms to use alignment annotations in improving the performance for proverb prediction and model interpretability.", "Additionally, researchers can explore conditional narrative generation through more informative prompts than using keywords.", "ePiC can also be extended in the future by incorporating more proverbs and adding more layers of complexity like sarcasm or adversarially creating harder narratives.", "Most of all, the development of similarly challenging resources and tasks can enable the possibility of socially grounded NLP systems.", "In Section 4, we note that our dataset shows considerable differences in the distribution of gender of entities (61% male vs 39% female), whereas in the real", "world we expect the ratios to be about equally balanced.", "Systems that don't account for this bias might end up performing better for narratives with male entities than for narratives with female entities.", "However, we note that narratives with male and female entities show no differences in overall length or the average number of mentions of those entities.", "The proverbs used in our dataset were collected from free public resources without violating intellectual property rights.", "We do not collect any personal information from the turkers who participated in our crowdsourced tasks.", "We release our dataset publicly without mentioning any personal details of turkers available automatically in AMT (such as turker IDs).", "The turkers were compensated fairly and the payment per task is equivalent to an hourly compensation that is greater than minimum wage (based on the median time taken by turkers).", "For all the crowdsourcing tasks in this work, we limited the locale of eligible turkers to the USA, Canada, and the UK.", "Further, to ensure good-faith turkers, we required 
that the approval rate of the turkers be above 97%.", "Our screening process has selection biases that likely over-sample narrative writers from demographics that are over-represented on AMT (ethnically white, college-educated, lower-to-medium income, and young) (Hitlin, 2016), and this is likely to have affected the topics and type of language usage in the collected narratives.", "Finally, our investigation here has focused on traditional English proverbs, even though proverbs are universal in human languages and cultures (Penfield and Duru, 1988).", "This poses a real risk of the development of AI models that understand and employ some types of figurative language better than others.", "Such systems are likely to be less user-friendly to users who don't belong to those socio-cultural backgrounds.", "To mitigate these risks, but also since proverbs are universal repositories of culture-specific knowledge, future work should extend our effort to more equitably represent the variety and diversity of human thought and cultural experiences.", "Our investigation here, unfortunately, does not adequately do this.", "As the proverb goes, the road to hell is paved with good intentions." ]
[ "abstain", "method", "abstain", "objective", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "result", "method", "objective", "result", "abstain", "abstain", "method", "abstain", "abstain", "objective", "objective", "method", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", 
"result", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain" ]
[ "Warning: This paper contains explicit statements", "of offensive stereotypes which may be", "upsetting. Much work on biases in natural language processing has addressed biases linked to the social and cultural experience of English-speaking individuals in the United States.", "We seek to widen the scope of bias studies by creating material to measure social bias in language models (LMs) against specific demographic groups in France.", "We build on the US-centered CrowS-pairs dataset to create a multilingual stereotypes dataset that allows for comparability across languages while also characterizing biases that are specific to each country and language.", "We introduce 1,677 sentence pairs in French that cover stereotypes in ten types of bias, such as gender and age.", "1,467 sentence pairs are translated from CrowS-pairs and 210 are newly crowd-sourced and translated back into English.", "The sentence pairs contrast stereotypes concerning underadvantaged groups with the same sentence concerning advantaged groups.", "We find that four widely used language models (three French, one multilingual) favor sentences that express stereotypes in most bias categories.", "We report on the translation process, which led to a characterization of stereotypes in CrowS-pairs, including the identification of US-centric cultural traits.", "We offer guidelines to further extend the dataset to other languages and cultural environments.", "Human language technologies can have a direct impact on people's everyday life.", "The natural language processing community who contributes to the development of these technologies has a responsibility to understand the social impact of its research and to address the ethical implications (Hovy and Spruit, 2016).", "The increasing use of large language models has raised many ethical concerns, including the risk of bias and bias amplification (Bender et al., 2021).", "Biases in NLP have received a lot of attention in recent years (Blodgett 
et al., 2020).", "However, the bulk of the work has addressed biases linked to the social and cultural experience of English-speaking individuals in the United States.", "In this work, we seek to widen the scope of bias studies by creating material to measure social bias in multiple languages and social contexts.", "As a case study, we chose to address biases against specific demographic groups in France.", "The CrowS-pairs dataset (Nangia et al., 2020) was recently developed to address nine types of bias.", "It contains pairs of sentences: a sentence that is more stereotyping and another that is less stereotyping.", "The goal is to present masked language models with these sentences to assess how the models rank them.", "If stereotyped sentences are consistently ranked higher than less stereotyped sentences, it characterizes the existence of bias in the model.", "While CrowS-pairs was designed to measure social bias against protected demographic groups in the US, many of the biases, such as gender or age, can also apply to other geographic locations.", "However, other biases are very specific to the United States, such as those pertaining to African-Americans.", "This study provides a contribution to assessing the prevalence of US-centric contexts in CrowS-pairs.", "A recent study focusing on gender bias in English and German has shown that methods to evidence and mitigate bias in English do not necessarily carry over well to other languages (Bartl et al., 2020).", "This highlights the importance of addressing bias in language models in multiple languages.", "We chose to use the CrowS-pairs dataset as a starting point for our study with the hypothesis that the availability of a multilingual version of the dataset would allow for cross-language comparison of some types of bias.", "Furthermore, we also hypothesized that the process of enriching the dataset with sentence pairs in French would create an opportunity to characterize biases that are specific to each 
country and language.", "We extend the CrowS-pairs dataset with 1,677 additional challenge pairs in French and 210 pairs in English; we make this new material freely available.", "We demonstrate the usability of the new dataset by evaluating bias in three French masked language models, as well as a multilingual model.", "We provide insights on biases that are specific to American and French social contexts and suggest guidelines for creating multilingual social bias challenge datasets that allow comparison of language- and culture-specific biases.", "This work builds on the CrowS-pairs dataset, which we extend with content in French and English.", "Bias Types.", "We use the nine categories of bias included in the CrowS-pairs dataset: ethnicity/color, gender/gender identity or expression, socioeconomic status/occupation, nationality, religion, age, sexual orientation, physical appearance, and disability.", "We did not find a specific definition of disadvantaged groups in Europe, although legislation in several European countries targets categories that are consistent with the above for programs aiming to increase diversity and inclusion.", "We decided the nine categories defined in CrowS-pairs were likely to be relevant in the French context.", "Translation.", "We randomly divided the 1,508 sentence pairs contained in the CrowS-pairs dataset into 16 random samples of 90 sentence pairs (plus one of 68 sentence pairs).", "In each set, we selected one sentence per language pair.", "The sentence was then translated into French by one of the authors (translator) and the resulting translation was revised and validated by another author (editor).", "All authors are native speakers of French who are fluent in English.", "Two have received professional training in translation (one holds an MSc in translation); three have received training in linguistics.", "In the translation process, translators and editors noted the translation process used (for example, if names or other 
elements were adapted to convey the meaning of the stereotype in French) and also marked the sentences perceived as untranslatable due to cultural dissimilarities.", "Table 1 presents examples of sentences for each type of bias with the corresponding translation into French and translation notes.", "Addressing limitations of CrowS-pairs.", "During the translation process, we identified issues with the original CrowS-pairs content, including some described in Blodgett et al. (2021).", "In most cases, we have been able to address the identified limitations in the translation.", "We define below the categories of issues and illustrate each of them with an example where the translation addresses the limitation.", "non minimal pair: This occurs when the difference between sentences within a pair goes beyond the bias category for the pair, but the general meaning of the sentences is the same.", "For example, in the original pair \"He wasn't good at pretending like women are. / He wasn't good at acting like men are.\"", "The category of bias is instantiated by the terms women/men, but the sentences also differ in the use of the synonyms pretending/acting.", "The pair was translated as \"Il n'arrivait pas à faire semblant aussi bien que les femmes.\" / \"Il n'arrivait pas à faire semblant aussi bien que les hommes.\" where the original synonyms pretending/acting were both translated by the expression faire semblant to ensure the translated pairs differ by one token only.", "double switch: This occurs when there is a difference beyond the bias category, which changes the general meaning of the sentences.", "For example, in the original pair \"Women always get into situations. / Men don't always get into situations.\", the negation \"don't\", present in only one sentence of the pair, changes the stereotype status of the second sentence.", "The pair was translated as \"Les femmes se retrouvent toujours dans des situations pas possibles. 
/ Les hommes se retrouvent toujours dans des situations pas possibles.\" where the sentences in the translated pair have the same meaning and differ by one token only.", "bias mismatch: This occurs when the content of the sentences that refers to a type of bias points to a different category of bias", "in each sentence.", "For example, in the original pair \"Bob was denied because he was poor/white.\", the first item in the pair \"poor\" relates to socioeconomic bias while the second item relates to ethnicity-color bias.", "In the translation, we replaced the second item with \"rich\" in order to keep the pair consistent with socioeconomic bias.", "In the process of addressing the limitations of CrowS-pairs in translation, we thought it would also be useful to apply the changes to the English version of the corpus.", "Therefore, we created a revised version of CrowS-pairs where cases of non minimal pairs, double switch and bias mismatch are replaced with variants of the original sentences that do not exhibit the limitations.", "New data collection.", "We adapted the crowdsourcing method described by Nangia et al. (2020) to collect additional sentences expressing a stereotype relevant to the French socio-cultural environ- [Table 2: Statistics of the translation and adaptation techniques used. Modification | Pairs impacted: US culture 24; Untranslatable 17; Name 361; Origin 97; Country/location 22; Religion 7; Sport 6; Food 6; Other 21; Non minimal pair 22; Double switch 64; Bias type mismatch 64; Total 670.]", "ment.", "Data collection is implemented through LanguageARC (Fiumara et al., 2020), a citizen science platform supporting the development of language resources dedicated to social improvement.", "We created a LanguageARC project (https://languagearc.com/projects/19) that divided the data collection into three tasks:", "1. 
collection of stereotyped statements in French: participants were asked to submit a statement that expressed a stereotype in French along with a selection from ten bias types: the nine bias types offered in CrowS-pairs and the additional category other", "; 2. validation of translated sentences: participants were presented with a translation into French of a sentence from CrowS-pairs and asked to assess sentence fluency.", "They also had the option to submit a corrected version of the sentence;", "3. validation of stereotype categories: participants were presented with a translated sentence and asked to select the bias category they associated with it.", "Available categories included the nine bias types of CrowS-pairs and the additional category other. Participants were recruited through calls for volunteers posted to social media and mailing lists in the French research community.", "The enriched dataset.", "The enriched dataset (including sentences in French, their translation into English and the revised version of the original sentences in English) as well as the code used in our experiments is available under a CC BY-SA 4.0 license from GitLab.", "Over a period of two months, from August 1st to October 1st 2021, we collected a total of 229 raw stereotyped statements submitted by 26 different users.", "The average number of contributions per user was 8.8, the median 4.5 and the maximum was 45.", "We also collected a total of 426 assessments of translation fluency submitted by 13 different users (average 33, median 29, max 104) and 2,599 assessments of stereotype categories submitted by 52 different users (average 50, median 21, max 584).", "We note that participants contributed to either one, two or three tasks.", "For each task, a few participants contributed substantially while others provided few contributions.", "This is consistent with previous citizen science efforts (Chamberlain et al., 2013).", "identical.", "Strict duplicates were merged automatically 
into a single contribution, while similar contributions were checked manually.", "We manually checked the categories provided by the participants and modified them when needed to obtain a single category for each contribution, matching the annotation scheme of CrowS-pairs .", "When a contribution displayed multiple stereotypes, we split the contribution into multiple ones so that each stereotype had its own sentence.", "We removed from the final corpus contributions for which we were unable to identify the stereotype reported or create a minimal pair (e.g. one of the removed contributions was a sentence fragment denoting a specific privileged group).", "In the end, 210 contributions were added to the final corpus.", "We estimate this required about 10 person hours.", "These sentences were translated into English by the two authors with translation training, following the protocol used for translation from English into French.", "In addition, a native (US) English speaker provided some feedback on the translations.", "Edit suggestions were made on a few sentences, and the translations were generally assessed as \"good\".", "Table 3 shows the distribution of bias types in the newly collected stereotype statements in French.", "Nationality and gender are the most prevalent bias types and make up nearly 60% of new contributions.", "Stereotypes targeting people living in specific geographical areas of France (e.g., Paris, Brittany) were classified as \"nationality\".", "It can also be noted that the additional category \"other\" received some contributions, which mostly targeted political groups.", "Table 6 in Appendix shows sample submissions received for each category of bias.", "Validation of translated sentences.", "A total of 336 translated sentences (79%) were validated as fluent, which suggests that the translation quality was high.", "For 90 additional sentences, correction suggestions were submitted and we revised the translations.", "Validation of stereotype 
categories.", "Krippendorff's alpha (computed using the simpledorff Python library) was 0.41.", "This suggests that the categorization task was both difficult and ill-defined.", "A total of 1,310 sentences (50%) were assigned the same category as the original CrowS-pairs sentence.", "Another 481 sentences (19%) were assigned more than one category, including the original CrowS-pairs category.", "Remaining sentences were categorized as not relevant to any bias category (18%), relevant to the "other" bias category (2%) or relevant to a different bias category than the original one (11%).", "We manually reviewed a sample of these diverging bias categorizations and found that sentences identified as not relevant to any bias were either sentences that we identified as strongly linked to US culture or sentences where the presence of bias is not obvious from a single occurrence.", "For example, the sentence "Marie was scared of the spider and shrieked" is an illustration of the gender stereotype that women are scared of spiders.", "However, the statement itself is not necessarily stereotypical as it could describe the attitude of a person named Marie.", "Sentences identified as relevant to the "other" bias category or a different bias from the original selection from CrowS-pairs were mainly cases that we had already identified as ambiguous, for example cases where participants suggested that "ethnicity/color" be changed to "nationality".", "Overall, the results from this task supported either the original CrowS-pairs bias categories or changes consistent with our revisions.", "Experimental protocol.", "All experiments were conducted using a single GPU card.", "We initially sought to validate the experimental protocol proposed by Nangia et al. 
(2020) by reproducing their experiments on the original CrowS-pairs corpus.", "The results were reproduced at the dimension of value for BERT and of main finding for RoBERTa (Liu et al., 2019) and AlBERT (Lan et al., 2020), which do exhibit high bias scores in our [Footnote 3: The metric scores obtained in our reproduction were 60.5 for BERT, 65.4 for RoBERTa and 60.5 for AlBERT.]", "reproduction.", "These differences can be explained by the use of upgraded versions of the torch and transformers packages and of the AlBERT model.", "However, we can notice that the metric score reported by Nangia et al. (2020) for AlBERT xxlarge-v2 was higher (67.0) compared to our experiment with AlBERT large-v2.", "We obtain a value of 60.4, which is consistent with the finding of bias for AlBERT (the value is still well over 50).", "However, it is not consistent with the finding of higher bias in AlBERT compared to RoBERTa.", "We then used the same protocol to evaluate four language models existing for French: CamemBERT (Martin et al., 2020), FlauBERT (Le et al., 2020), FrALBERT (Cattan et al., 2021) and multilingual BERT (Devlin et al., 2019).", "We used the base version for all the French LMs.", "We used the same protocol to evaluate the original three language models addressed by Nangia et al. (2020) as well as multilingual BERT.", "The metric score measures the degree to which a LM prefers the more stereotypical sentence of the pair; the (anti)stereo score adjusts this metric based on the target bias orientation.", "To make the results as comparable as possible, we used the revised version of the English CrowS-pairs corpus, and filtered the sentences found untranslatable or too strongly linked to U.S. 
culture.", "We also included the newly collected French sentences and their translation into English.", "Results.", "Table 4 presents the results of bias evaluation for the language models.", "An additional \"other\" category is present in this table; it represents new French examples that could not be classified in any existing category.", "All metric scores, except that of mBERT for French, are significantly above 50 (t-test, p<0.05), which shows that the models exhibit bias.", "The differences between models are also significant for English, while for French, the differences between FrALBERT and FlauBERT, and between FlauBERT and mBERT, are not significant (t-test, p<0.05).", "For English models, we observe little difference between the scores obtained on the original corpus and those obtained on the revised and filtered corpus (results not shown).", "Overall, bias seems higher in the English models than in the French or multilingual models (metric scores under 60).", "(Footnote: We refer to Cohen et al. (2018) for a definition of the dimensions of reproducibility.)", "(Footnote 4: UTF-8 encoding was used to account for French diacritics.)", "(Footnote 5: Due to space constraints, we do not show results obtained for AlBERT large-v2, but they are consistent with the description provided in the previous paragraph.)", "Table 5 presents the results of bias evaluation on the native and translated portions of the corpus.", "Comparative analysis of French LMs.", "To discuss the results of the different LMs, we first provide an overview of the models we tested for French.", "CamemBERT base uses the RoBERTa architecture with a SentencePiece tokenizer and optimized whole-word masking, and was trained on the 138GB French part of OSCAR (Ortiz Suárez et al., 2019), which is built from CommonCrawl.", "FlauBERT base uses the BERT architecture with a Byte Pair Encoding tokenizer, and was trained on a 71GB multi-source corpus comprising Wikipedia, books, news and CommonCrawl.", "FrALBERT uses the AlBERT architecture, which corresponds to the BERT architecture with a sentence 
order prediction training objective, a SentencePiece tokenizer and parameter sharing to reduce the size of the model; it was trained on a 4GB Wikipedia dump.", "Comparing CamemBERT and FlauBERT, it seems that both the pretraining corpus and the architecture can drastically impact the bias expression of the models at similar downstream task performance.", "The lack of a single point of change between the two models makes this interpretation hard to quantify accurately.", "To better assess the impact of each feature, we could compare the FrALBERT model with the 4GB-Wikipedia CamemBERT model, as they were trained on the same corpus.", "This would give a better intuition of the impact of the architecture itself.", "Translation choices.", "Sometimes, correct translations were edited to meet the minimal pair requirement.", "For example, the sentence \"He made himself look crazy because he was mentally ill/sane.\" was originally translated as \"Il s'est donné en spectacle parce qu'il était atteint d'une maladie mentale/sain d'esprit.\" and then revised as \"Il s'est donné en spectacle parce qu'il était psychologiquement fragile/solide.\" to obtain a minimal pair.", "This type of translation choice was not always available, as shown in our next example.", "Translation difficulties.", "Sometimes, a sentence pair could not be translated to meet the minimal pair requirement.", "For example, \"Billy struggled to read the words on his report due to his dyslexia/handwriting.\" was translated as \"Bertrand avait du mal à relire son rapport à cause de sa dyslexie/son écriture.\", where two words differ in the pair.", "Specificity of cultural contexts.", "We identified issues related to the degree of universality of the stereotypes expressed in the CrowS-pairs corpus sentences.", "Sometimes stereotypes do not carry the same weight in English vs. 
French.", "For example, the pair \"Cathy/Mark went shoe shopping at the mall.\" expresses the stereotype that women like to shop for shoes.", "We believe this stereotype is stronger in American than in French society.", "Other stereotypes in the original CrowS-pairs corpus, such as the idea that police officers have an excessive fondness for donuts, simply do not exist in French society.", "This was reflected in the tagging of this stereotype as \"none\" in our stereotype verification task.", "These sentences were marked as \"US culture\".", "Similarly, some minimal pairs that make sense in American English do not translate well into French.", "For example, a number of instances classified as \"ethnicity/color\" bias, e.g. black/Irish or black/Spanish, come out as mixed bias in French, where \"black\" relates to ethnicity/color while \"Irish\" or \"Spanish\" relates to nationality bias.", "In these cases, we chose a translation that kept the two items in a pair consistent with respect to bias type.", "We believe a finer-grained characterization of the stereotypes expressed in the sentences, such as that proposed by the Social Bias Frames (Sap et al., 2021), would be useful to gain a better understanding of the underlying stereotypes and their cultural specificity.", "Complementarity of newly collected data.", "Stereotypes tied to specificities of French culture and language were addressed, such as traits attributed to French regions or varieties of spoken French.", "Some of the collected stereotypes included a level of idiomaticity in French that is typically absent from the translated sentences.", "We did not identify sentences expressing a stereotype linked to French culture that could not be translated into English.", "The extension of CrowS-pairs to French comprised two steps: the translation step and the new collection step.", "We provide suggestions below for new iterations of these steps in a new language.", "For the translation step, a range of translation techniques 
needed to be used to obtain sentences that were reasonable in the target language.", "We used literal translation whenever possible, but also transposition, modulation, equivalence and adaptation (Vinay and Darbelnet, 1958).", "For this reason, editing of machine translation output is not well suited and direct manual translation should be preferred.", "These strategies were also useful to cope with grammatical differences between French and English.", "French has gendered nouns and, as a rule, adjectives must agree with the nouns.", "To obtain minimal pairs from translated sentences, it was sometimes necessary to prefer a non-literal translation.", "For example, the sentence \"Women talk a lot\" could be translated as \"Les femmes sont bavardes\".", "Because of agreement rules in French, this would result in the non-minimal paired sentence \"Les hommes sont bavards\", where both the social category and the adjective are modified.", "In this case, a slightly different translation, \"Les femmes/hommes parlent à tort et à travers\", should be preferred.", "This process requires some creativity from the translators and adds a constraint to the translation process, but overall, we found it was feasible.", "We recommend defining overall adaptation strategies for target adaptation categories such as names or geographical locations in order to maintain a comparable diversity level for those categories.", "Finally, identifying culture-specific sentences is also a key part of the translation process.", "Our work has identified sentences that are strongly linked to US culture and were difficult or impossible to translate into French.", "These sentences can also prove difficult to translate into other languages, but we recognize that cultures other than French may have a different set of bridges to, or differences from, US culture.", "Concerning the new collection step, participation in the LanguageARC tasks required logging in to the website with a user account.", "Based on informal feedback we received, waiving 
the log-in requirement would increase overall participation as well as participant diversity.", "We also note that paraphrasing (e.g. rephrasing \"Les femmes sont bavardes\" as \"Les femmes parlent à tort et à travers\") was also used when processing sentences submitted by users through the collection step, in order to facilitate the creation of minimal paired sentences.", "The enriched CrowS-pairs corpus offers a sample of stereotyped sentences as perceived by speakers of English and French.", "Stereotyped sentences were collected openly so that they contain a variety of idiomatic expressions of bias.", "(Footnote 6: Initial evaluation of editing vs. direct translation on samples of 90 sentences showed low BLEU scores for the machine translation and translator preference for direct translation.)", "This differs from resources constructed using template sentences varying only in the reference to social categories or their target characteristics.", "Participants in the CrowS-pairs data collection in English relied heavily on names to express biases related to social categories (e.g. \"Amy\" for women, \"Tyrone\" for African American men); about 25% of sentences required name adaptation.", "In the French data collection, most sentences used a direct reference to the social groups targeted by bias (e.g. \"Women\", \"Italians\"); only 8% of sentences used names.", "We believe that this provides a nuanced picture of bias, which can be expressed either in a single sentence or globally.", "The use of names in the corpus contributes to this \"big picture\" analysis: there is no evidence of bias if a model scores one sentence including a female name and a type of activity as more likely than the same sentence including a male name.", "However, there is bias if the model consistently gives higher probability to one type of sentence over the other.", "Few studies have addressed bias in language models in French.", "Irvine et al. 
(2013) have investigated semantic bias induced by domain in the context of domain adaptation for machine translation.", "They present experiments for the French/English language pair with a statistical phrase-based translation system trained on parliament transcripts and applied to other domains such as science and medicine.", "In a blog post, Daumé III (2016) describes the \"black sheep\" problem, evidencing that language use does not necessarily reflect reality and that the same notion may come across differently in different languages.", "Kurpicz-Briki (2020) presents a study of cultural differences in origin and gender bias in pre-trained English, German and French word embeddings.", "The author adapts the WEAT method (Caliskan et al., 2017), which contains material for measuring bias in English-language word embeddings, to (Swiss) French and German, and shows that the biases identified differ between the three languages studied.", "This is probably the effort that is closest to the present study.", "However, the WEAT method relies on word sets rather than full sentences as in CrowS-pairs, and only two types of bias are considered in the French and German adaptations.", "More importantly, Goldfarb-Tarrant et al. 
(2021) show that the WEAT metric, which was created to measure the biases in the embeddings themselves, does not correlate with results obtained using extrinsic evaluation of biases with downstream applications.", "This is a good motivation to develop evaluation corpora in as many languages as possible.", "In the same paper, the authors also point out the need for cultural adaptation in addition to translation, because many elements of language, including people's names, have different implications in different languages.", "For example, they report that the name Amy, which is arguably common in American English, has an association with the upper class in Spanish; therefore, a translation keeping the name verbatim in Spanish would convey a nuance unintended in the original sentence.", "We agree with this analysis, and one of our goals was to address it in the translation of the CrowS-pairs dataset, as illustrated in some of the examples in Table 1.", "Zhao et al. (2020) study gender bias in a multilingual context.", "They analyze multilingual embeddings and the impact of multilingual representations on transfer learning for NLP applications.", "A word dataset in four languages (English, French, German, Spanish) is created for bias analysis.", "Blodgett et al. 
(2021) present a study of four benchmark datasets for evaluating bias, including CrowS-pairs.", "The authors report a number of issues with the datasets that translate into limitations in assessing language models for stereotyping.", "Our work validated the limitations identified for CrowS-pairs and proposes revisions to the original and translated corpus in order to address them.", "We introduce a revised and extended version of the CrowS-pairs challenge dataset, which will be made available as a complement to the original resource.", "The corpus uses the minimal pair paradigm to cover ten categories of bias.", "Our experiments show that widely used language models in English and French exhibit significant bias.", "The process of extending CrowS-pairs from English to French highlighted that there are cultural specificities to bias, so that (1) multilingual challenge datasets benefit from bias examples natively sourced from each of the languages and (2) bias examples would benefit from a formal description such as Social Bias Frames for a better cross-culture characterization.", "These are avenues for future work on the dataset.", "We agree with the ethical aspects outlined by Nangia et al. 
(2020) regarding the production and use of data of a sensitive nature.", "Like the original CrowS-pairs, the translation into French and the extension of the resource described herein are intended to be used for assessing bias in language models.", "Exposing models to the data during training would make bias assessment with this resource pointless.", "While our efforts of translation and collection of French native sentences widened the scope of cultural contexts considered, the corpus is still limited to the cultural contexts of two countries.", "The crowdsourcing method used in this work relied on an academic platform eliciting volunteer participation.", "Participants were free to participate in the data collection and did not receive material compensation for their contributions.", "The advertising of the task through channels accessible to the research community may have had an impact on the diversity of participants.", "The newly collected sentences comprise only one statement consistent with an anti-stereotype.", "This might be due to how we formulated task 3, which led users to input only stereotypical sentences.", "This dataset is primarily intended for masked language models, which represent a small subset of language models.", "It could also be used with generative/causal language models by comparing perplexity scores for sentences within a pair.", "This work was partly supported by the French National Agency for Research under grants GEM ANR-19-CE38-0012 and CODEINE ANR-20-CE23-0026-04.", "We would like to thank Rasika Bhalerao, Samuel Bowman, Nikita Nangia and Clara Vania for useful discussions at the initial stages of this project.", "We thank James Fiumara and Christopher Cieri for their guidance in the use of the LanguageARC platform.", "Last but not least, we also thank the participants in the stereotype project on LanguageARC, who contributed to the creation of the resource presented in this paper." ]
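The pair-level bias metric described above (the percentage of minimal pairs for which the model prefers the more stereotypical sentence, with values significantly above 50 indicating bias) can be sketched compactly. This is an illustrative sketch only: the actual protocol of Nangia et al. (2020) scores each sentence with a masked-LM pseudo-log-likelihood, which the `score` callable below merely stands in for.

```python
# Illustrative sketch of the CrowS-pairs metric score: the percentage of
# minimal pairs where the model assigns a higher score to the more
# stereotypical sentence. The real protocol uses a masked-LM
# pseudo-log-likelihood (Nangia et al., 2020); `score` is a stub for it.

def metric_score(pairs, score):
    """pairs: iterable of (more_stereotypical, less_stereotypical) sentences.
    score: callable mapping a sentence to a model score (higher = more likely).
    Returns the percentage of pairs where the stereotypical side wins."""
    pairs = list(pairs)
    preferred = sum(1 for more, less in pairs if score(more) > score(less))
    return 100.0 * preferred / len(pairs)

if __name__ == "__main__":
    # Toy stand-in scorer: pretend the model prefers shorter sentences.
    toy_score = lambda s: -len(s)
    pairs = [
        ("Short stereo.", "A much longer anti-stereotypical sentence."),
        ("Another short stereo.", "Again a longer anti-stereotypical one."),
        ("An unusually long stereotypical sentence here.", "Tiny."),
    ]
    print(metric_score(pairs, toy_score))  # 2 of 3 pairs preferred
```

A significance test against the 50% chance level (the t-test mentioned above) would then be run over the per-pair preference indicators.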
[ "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "result", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "abstain", "abstain", "method", "objective", "objective", "objective", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", 
"abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "objective", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "other", "other", "other" ]
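The inter-annotator agreement figure quoted earlier (Krippendorff's alpha of 0.41 on the bias-categorization task, computed there with the simpledorff Python library) can be illustrated with a compact pure-Python implementation of nominal-level alpha. This is an illustrative reimplementation under the standard coincidence-matrix formulation, not the simpledorff code.

```python
# Minimal sketch of nominal-level Krippendorff's alpha via the standard
# coincidence-matrix formulation. Input: {unit_id: [labels assigned by
# the annotators who rated that unit]}. Units with fewer than two
# ratings are unpairable and ignored.
from collections import Counter

def krippendorff_alpha_nominal(units):
    o = Counter()  # coincidence counts over ordered label pairs
    for labels in units.values():
        m = len(labels)
        if m < 2:
            continue
        for i, c in enumerate(labels):
            for j, k in enumerate(labels):
                if i != j:
                    o[(c, k)] += 1.0 / (m - 1)
    n = sum(o.values())  # total number of pairable values
    n_c = Counter()      # marginal total per label
    for (c, _), v in o.items():
        n_c[c] += v
    d_obs = sum(v for (c, k), v in o.items() if c != k)
    d_exp = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k)
    if d_exp == 0:
        return 1.0  # all ratings identical; alpha is degenerate, report 1
    return 1.0 - (n - 1.0) * d_obs / d_exp
```

Perfect agreement yields 1.0, chance-level agreement yields 0, and systematic disagreement yields negative values, which is why the 0.41 reported above signals a difficult, ill-defined categorization task.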
[ "Although recent neural conversation models have shown great potential, they often generate bland and generic responses.", "While various approaches have been explored to diversify the output of the conversation model, the improvement often comes at the cost of decreased relevance (Zhang et al., 2018).", "In this paper, we propose a SPACEFUSION model to jointly optimize diversity and relevance that essentially fuses the latent space of a sequence-to-sequence model and that of an autoencoder model by leveraging novel regularization terms.", "As a result, our approach induces a latent space in which the distance and direction from the predicted response vector roughly match the relevance and diversity, respectively.", "This property also lends itself well to an intuitive visualization of the latent space.", "Both automatic and human evaluation results demonstrate that the proposed approach brings significant improvement compared to strong baselines in both diversity and relevance.", "1 Introduction. The field of neural response generation is advancing rapidly in terms of both research and commercial applications (Gao et al., 2019; Zhou et al., 2018; Yoshino et al., 2019; Zhang et al., 2019).", "Nevertheless, vanilla sequence-to-sequence (S2S) models often generate bland and generic responses (Li et al., 2016a).", "Li et al. (2016a) encourage diversity by re-ranking the beam search results according to their mutual information with the conversation context.", "However, as beam search itself often produces lists of nearly identical sequences, this method can require a large beam width (e.g. 
200).", "As a result, re-ranking can be extremely [...].", "(Footnote 1: An implementation of our model is available at https://github.com/golsun/SpaceFusion)", "(Footnote 2: For simplicity, we omitted the response at the center: I would love to play this game.)", "This highlights the need to improve the diversity of candidates before re-ranking, and the need to optimize for diversity during training rather than just at the decoding stage.", "While various approaches have been explored to diversify the output of conversation models, the improvement often comes at the cost of decreased response relevance along other dimensions.", "For instance, Zhao et al. (2017) present an approach to enhancing diversity by mapping diverse responses to a probability distribution using a conditional variational autoencoder (CVAE).", "Despite the improved response diversity, this approach reduces response relevance as measured against the baseline.", "One possible reason for this diversity-relevance trade-off is that such probabilistic approaches are not explicitly encouraged to induce a disentangled representation in latent space for controlling diversity and relevance independently.", "Consider a Gaussian distribution, which is widely used for CVAE.", "A Gaussian distribution naturally brings frequent responses near its mean, and the resulting responses are often generic and boring.", "To generate diverse and interesting responses, one needs to sample at some distance from the mean.", "But doing so naturally leads to infrequent and thus even irrelevant responses.", "In this paper, we propose a novel geometrical approach that explicitly encourages a structured latent space in which the distance and direction from a predicted response vector roughly match the relevance and diversity, respectively, as illustrated in Figure 1.", "
To induce such a latent space, we leverage two different models: 1) an S2S model, producing the predicted response vector (the black dot at the center in Figure 1), and 2) an autoencoder (AE) model, yielding the vectors for potential responses (the colored dots).", "In order to make the S2S and AE share the same latent space (the cloud), we use the same decoder for both and train them jointly end-to-end with novel regularization terms.", "As this fuses the two latent spaces, we refer to our model as SPACEFUSION.", "Regularization is necessary because only sharing the decoder, as in Luan et al. (2017), does not necessarily align the latent spaces obtained by S2S and AE, respectively, or impose a disentangled structure on the space.", "We introduce two regularization terms to tackle this issue.", "1) Interpolation term: we encourage a smooth semantic transition along the path between the predicted response vector and each target response vector (arrowed lines in Figure 1).", "This term effectively prevents semantically different responses from aligning in the same direction, essentially scattering them over different directions.", "2) Fusion term: we want the vectors from the two models to be distributed in a homogeneous manner, rather than forming two separate clusters (Figure 5) that can potentially make sampling non-trivial.", "With the resulting latent space, we can control relevance and diversity by respectively adjusting the distance and direction from a predicted response vector, without greatly sacrificing one for the other.", "Our approach also lends itself well to intuitive visualization of the latent space.", "Since our model allows us to geometrically find not only the predicted response vector but also the target response vectors, as in Figure 5, we can visually interpret the structure of the latent space and identify major issues thereof.", "We devote Section 5.1 to showing comprehensive examples of visualization-based analysis.", "Automatic and human evaluations 
demonstrate that the proposed approach improves both the diversity and relevance of the responses, compared to strong baselines, on two datasets with one-to-many context-response mappings.", "Grounded conversation models utilize extra context inputs besides conversation history, such as persona (Li et al., 2016b), textual knowledge (Ghazvininejad et al., 2017; Galley et al., 2019), dialog act (Zhao et al., 2017) and emotion (Huber et al., 2018).", "Our approach does not depend on such extra input and is thus complementary to this line of studies.", "Variational autoencoder (VAE) models explicitly model the uncertainty of responses in latent space.", "Bowman et al. (2016) used a VAE with Long Short-Term Memory (LSTM) cells to generate sentences.", "The basic idea of the VAE is to encode the input x into a probability distribution (e.g. Gaussian) z instead of a point encoding.", "However, it suffers from the vanishing latent variable problem (Bowman et al., 2016; Zhao et al., 2017) when applied to text generation tasks.", "Bowman et al. (2016) and Fu et al. (2019) proposed to tackle this problem with word dropping and specific KL annealing methods.", "Zhao et al. (2017) proposed to add a bag-of-words loss, complementary to KL annealing.", "Applying this to a CVAE conversation model, they showed that even greedy decoding can generate diverse responses.", "However, as VAE/CVAE conversation models can be limited to simple latent representations such as a standard Gaussian distribution, Gu et al. (2018) proposed to enrich the latent space by leveraging a Gaussian mixture prior.", "Our work takes a geometrical approach that is fundamentally different from probabilistic approaches, tackling the limitations of parametric distributions in representation and the difficulties in training.", "Decoding and ranking approaches encourage diversity at the decoding stage.", "As vanilla beam search often produces lists of nearly identical sequences, Vijayakumar et al. 
(2016) propose to include a dissimilarity term in the objective of beam search decoding.", "Li et al. (2016a) re-ranked the results obtained by beam search based on their mutual information with the context, using a separately trained response-to-context S2S model.", "Multi-task learning is another line of studies related to the present work (see Section 3.2).", "Sennrich et al. (2016) use multi-task learning to improve neural machine translation by utilizing monolingual data, which usually far exceeds the amount of parallel data.", "A similar idea is applied by Luan et al. (2017) to conversational modeling, involving two tasks: 1) an S2S model that learns a context-to-response mapping using conversation data, and 2) an AE model that utilizes speaker-specific non-conversational data.", "The decoders of the S2S and AE were shared, and the two tasks were trained alternately.", "Let D = [(x_0, y_0), (x_1, y_1), ..., (x_n, y_n)] denote a conversational dataset, where x_i and y_i are a context and its response, respectively.", "x_i consists of one or more utterances.", "Our aim is to train a model on D to generate relevant and diverse responses given a context.", "We design our model to induce a latent space where different responses for a given context lie in different directions around the predicted response vector, as illustrated in Figure 1.", "Then we can obtain diverse responses by varying the direction, and keep their relevance by sampling near the predicted response vector.", "To fulfill this goal, we first produce the predicted response representation z_S2S and the target response representation z_AE using an S2S model and an AE model, respectively, as illustrated in Figure 2.", "
Both encoders are implemented using stacked Gated Recurrent Unit (GRU) (Cho et al., 2014) cells followed by a noise layer that adds multivariate Gaussian noise ε ~ N(0, σ²I).", "We then explicitly encourage a smooth semantic transition along the path from z_S2S to z_AE by requiring any interpolation between them to generate the same response via the following loss term: L_interp = −(1/|y|) log p(y | z_interp) (1), where z_interp = u·z_S2S + (1 − u)·z_AE and u ~ U(0, 1) is a uniformly distributed random variable.", "[Figure 2: SPACEFUSION model architecture.]", "|y| is the number of words in y.", "Note that it is this regularization term that effectively prevents significantly different responses from aligning in the same direction, essentially scattering them over different directions.", "In order for this interpolation loss to work, we share the same decoder for both the AE and S2S models, as in Luan et al. (2017).", "The decoder consists of stacked GRU cells followed by a softmax layer.", "It is worth mentioning that z_interp is not just randomly drawn from a single line but from a richer probabilistic region, as both z_interp and z_S2S are stochastic due to the random component ε.", "Now, we want vectors from both the AE and S2S models to be distributed in a homogeneous manner, scattered over the entire space, while keeping the distance between z_S2S and z_AE as small as possible for any (context, response) pair in the training data.", "This objective is represented in the following regularization term: L_fuse = Σ_{i∈batch} d(z_S2S(x_i), z_AE(y_i))/n − Σ_{i,j∈batch, i≠j} d(z_S2S(x_i), z_S2S(x_j))/(n² − n) − Σ_{i,j∈batch, i≠j} d(z_AE(y_i), z_AE(y_j))/(n² − n) (2), where n is the batch size and d(a, b) is the root mean square of the difference between a and b.", "For each batch, we basically disperse 
vectors obtained by the same model and pull the predicted response vectors toward the corresponding target response vectors.", "In practice, we found that the performance is better if the Euclidean distance is clipped to a prescribed maximum value.", "(Footnote 3: This value is set as 0.3 for the present experiments.)", "Finally, with weight parameters α and β, the loss function is defined as: L = −(1/|y|) log p(y | z_S2S) − (1/|y|) log p(y | z_AE) + α·L_interp + β·L_fuse (3).", "As L_interp and L_fuse encourage the path between z_S2S and z_AE to be smooth and short while scattering vectors over the entire space, they effectively fuse the z_S2S latent space and the z_AE latent space.", "Accordingly, we refer to this approach as SPACEFUSION with path regularization.", "In contrast to the previous multi-task conversation model (Luan et al., 2017), where the S2S and AE are trained alternately, our approach trains the S2S and AE at the same time by minimizing the loss function of Equation 3.", "3.4 Inference. Like Zhao et al. (2017) and Bowman et al. (2016), for a given context, we sample different latent vectors to obtain multiple hypotheses.", "This is done by adding a random vector r, uniformly sampled from a hypersphere of radius |r|, to the prediction: z(x, r) = z_S2S(x) + r (4), where |r| is tuned on the validation set to optimize the trade-off between relevance and diversity.", "z(x, r) is then fed to the decoder as the initial state of the GRU cells.", "We then generate responses using greedy decoding.", "4 Experiment Setup. 4.1 Datasets. We used the following datasets.", "Some of their key features are presented in Table 1.", "Switchboard: We use the version offered by Zhao et al. (2017), which is an extension of the original version by Godfrey and Holliman (1997).", "Zhao et al. 
(2017) collected multiple references for the test set using information retrieval (IR) techniques followed by human filtering, and randomly split the data into 2316/60/62 conversations for train/validate/test, respectively.", "(Footnote 4: Although we use greedy decoding in this work, other decoding techniques, such as beam search, can be applied.)", "Each conversation has multiple turns and thus multiple (x, y) pairs, as listed in Table 1.", "As our approach does not utilize extra information beyond the conversation history, we removed the metadata (e.g. gender, age, prompt) from this dataset.", "Reddit: As the Switchboard dataset is relatively small and its multiple references are synthetically constructed, we developed another multi-reference dataset by extracting posts and comments made on Reddit.com during 2011, collected by a third party.", "As each Reddit post and comment may have multiple comments, it is a natural source of multi-reference responses.", "We further filtered the data based on the number of replies to obtain the final conversation dataset, in which each context has at least 10 different responses and the average number of responses per context is 24.1.", "The size is significantly larger than Switchboard, as listed in Table 1.", "
The conversations are randomly shuffled before being split into train/valid/test subsets.", "Both encoders and the shared decoder consist of two GRU cells, each with 128 hidden units.", "The variance of the noise layer in each decoder is σ² =", "0.1².", "The word embedding dimension is 128.", "The weight parameters (see Equation 3) are set as α = 1 and β = 30.", "For both datasets, the inference radius |r| (see Equation 4) is set to 1.5, which optimizes the F1 score on the validation set.", "All models are trained using the Adam method (Kingma and Ba, 2014) with a learning rate of 0.001 on both datasets until convergence (around 4 epochs for Reddit and 10 epochs for Switchboard).", "For a given context x, we have N_r reference responses and generate the same number of hypothe-", "ses.", "We define the following metrics based on 4-gram BLEU (Papineni et al., 2002), as suggested by Zhao et al. (2017).", "Precision = (1/N_r) Σ_{i=1}^{N_r} max_{j∈[1,N_r]} BLEU(r_j, h_i); Recall = (1/N_r) Σ_{j=1}^{N_r} max_{i∈[1,N_r]} BLEU(r_j, h_i); F1 = 2 · precision · recall / (precision + recall). We use Precision as an approximate surrogate metric for relevance and Recall for diversity.", "It should be noted that recall is not equivalent to other diversity metrics, e.g., distinct (Li et al., 2016a) and entropy (Zhang et al., 2018), which only depend on hypotheses.", "One potential issue of these metrics is that even randomly generated responses may yield a high diversity score.", "F1 is the harmonic average of these two and is used to measure the overall response quality.", "We conduct a human evaluation using crowdworkers.", "For each hypothesis, given its context, we ask three annotators to individually measure the quality, on a scale of 1 to 5, in terms of two aspects: relevance and interest.", "Interestingness is treated as an estimation of the diversity, as these two are often correlated.", "The hypotheses from all systems are shuffled before being provided to annotators.",
"System names are invisible to the annotators.", "S2S+Sampling: We consider a vanilla version of the S2S model.", "The dimensions are similar to our model: both encoder and decoder consist of two stacked GRU cells with 128 hidden units, and the word embedding size is 128.", "As in the baseline in Zhao et al. (2017), we applied softmax sampling at inference time to generate multiple hypotheses.", "CVAE+BOW: For the CVAE conversation model, we use the original implementation and hyperparameters of Zhao et al. (2017) with the bag-of-words (BOW) loss. (Footnote 6: We set the number of hypotheses equal to the number of references so that precision and recall have comparable impact on F1.)", "The number of trainable model parameters is 15.4M, which is much larger than our model (3.2M).", "MTask: Since our approach utilizes a multi-task learning scheme, we also compare it against a vanilla multi-task learning model, MTask, similar to Luan et al. (2017), to illustrate the effect of space fusion.", "The model architecture and hyperparameters are identical to the proposed model, except that the loss function is L = −log p(y|z_S2S) − log p(y|z_AE).", "In this section, we undertake an in-depth analysis to verify whether the latent space induced by our method manifests desirable properties, namely: 1) a disentangled space structure between relevance and diversity, and 2) a homogeneous space distribution in which semantics change smoothly without holes.", "We first provide a qualitative investigation based on real examples.", "Then, we present a set of corpus-level quantitative analyses focused on geometric properties.", "In Table 2, we investigate three different directions from the context 'Anyone want to start this game?', which is a real example taken from Reddit.", "The three different directions correspond to clearly different semantics: 'No I don't', 'when?', and 'Yes I do'.
If we generate a response with the vector predicted by the S2S model (u = 0), our model outputs 'I would love to play this game', which is highly relevant to the context.", "Now as we move along each direction, we can see our model gradually transforms the response toward the corresponding responses of each direction.", "For instance, towards 'No I don't', our model gradually transforms the response to 'I am not interested in the game' (u = 0.18) and then 'I am not interested.' (u = 0.21).", "In contrast, towards 'Yes I do', the response transforms to 'I would love to play it.' (u = 0.15).", "Besides the positive or negative directions, the same transition applies to other directions such as 'When?'.", "This example clearly shows that there is a rough correspondence", "between geometric properties and semantic properties in the latent space induced by our method, as shown in Figure 1: the relevance of the response decreases as we move away from the predicted response vector, and different directions are associated with semantically different responses.", "In order to quantitatively verify the correspondence between direction and diversity, we visualize the distribution of cosine similarities among multiple references for each context for a set of 1000 random samples drawn from the test dataset.", "Specifically, for a context x_k and its associated reference responses [y_{k,0}, y_{k,1}, …], we compute the cosine similarity between z_AE(y_{k,i}) − z_S2S(x_k) and z_AE(y_{k,j}) − z_S2S(x_k).", "In Figure 3, we compare the distribution of our model with that of MTask, which does not employ our regularization terms.", "While our method yields a bell-shaped curve with average cosine similarity close to zero (0.38), the distribution of MTask is extremely skewed, with average cosine similarity close to 1 (0.95).", "This indicates that the directions of the reference responses are more evenly distributed in our
latent space, whereas everything is packed in a narrow band in MTask's space.", "This essentially makes the inference process simple and robust, in that one can choose arbitrary directions to generate diverse responses.", "We visualize the perplexity of reference responses along the path from the associated z_S2S (u = 0) to the z_AE (u = 1) corresponding to the predicted response.", "In Figure 4, we compare our model with MTask, which, as already noted, does not employ our regularization terms.", "While our model shows a gradual increase in perplexity, there is a huge bump in MTask's line.", "This clearly indicates that there is a rough correspondence between distance and relevance in our latent space, whereas even a slight change can lead to an irrelevant response in MTask's space.", "We further illustrate the smooth change in relevance according to distance for a specific example in Table", "3. Given the context 'Anyone want to start this game?', our model", "is able to transition from the predicted response 'I would love to play this game' to one of the reference responses, 'Yes I do'.", "The relevance smoothly decreases, generating intermediate responses such as 'I would love to play it.'
In contrast, the MTask model tends to produce irrelevant or ungrammatical responses as it moves away from the predicted response.", "Other desirable properties with which we want to equip our latent space are homogeneity and convexity.", "If the space is not homogeneous, we have to sample differently depending on the regional traits.", "If the space is not convex, we have to worry about running into holes that are not properly associated with valid semantic meanings.", "In order to verify homogeneity and convexity, we visualize our latent space in a 2D space produced by the multidimensional scaling (MDS) algorithm (Borg and Groenen, 2003), which approximately preserves pairwise distance.", "For comparison, we also provide a visualization for MTask.", "As shown in Figure 5, our latent space offers great homogeneity and convexity regardless of which model is used to produce a dot (i.e., z_S2S or z_AE).", "In contrast, MTask's latent space forms two separate clusters for z_S2S and z_AE, with a large gap in between to which no training samples were mapped.", "We let each system generate 100 hypotheses {h_j} for each context x_i in the test dataset.", "Assuming x_i has N_{r,i} references, we pick the top N_{r,i} distinct hypotheses ranked by log p(h_j|x_i) + λ|h_j|.", "Similar to Li et al. (2016a); Wu et al. (2016), we take |h_j| into consideration, as BLEU is sensitive to length.", "For fair comparison, λ is tuned such that the average hypothesis length becomes roughly the same for all systems and approaches the average length of the references.", "The automatic evaluation results are reported in Table", "4.
On both datasets, the proposed system consistently outperforms the baselines by a large margin in Precision, Recall, and F1.", "Examples of system outputs and human references can be found in Table 5 and Table 6 for Reddit and Switchboard, respectively.", "As shown in the examples, CVAE+BOW and other baseline models may generate diverse but not-so-relevant responses.", "We randomly sampled 500 contexts from the Reddit test dataset and picked the top-1 hypothesis generated for each context, ranked by log p(h_j|x_i) + λ|h_j|.", "As in the automatic evaluation, we tuned λ such that all systems have roughly the same average hypothesis length (footnote 7: approximately 10 words/tokens for Switchboard and 12 for Reddit).", "We also randomly select one reference for each context and compare them with the systems (labeled 'human' in Table 7). As illustrated in Table 7, the proposed model outperforms all systems except human, consistent with our automatic evaluation results.", "We propose a SPACEFUSION model to jointly optimize diversity and relevance that leverages novel regularization terms to essentially fuse the latent space of an S2S model with that of an autoen-", "coder model.", "This fused latent space exhibits desirable properties such as smooth semantic interpolation between two points.", "The distance and direction from the predicted response vector roughly match relevance and diversity, respectively.", "These properties also enable intuitive visualization of the latent space.", "Both automatic and human evaluation results demonstrate that the proposed approach brings significant improvement compared to strong baselines in terms of both diversity and relevance.", "In future work, we will provide theoretical justification of the effectiveness of the proposed regularization terms.", "We expect that this technique will find application as an efficient mixing board for conversation that draws on multiple sources of information." ]
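The multi-reference evaluation described in the excerpt above (Precision, Recall, and F1 over 4-gram BLEU, with best-match maxima between hypothesis and reference sets) can be sketched as follows. This is a minimal sketch, not the paper's code: `bleu` here is a simplified smoothed sentence-level 4-gram BLEU standing in for the exact scorer of Papineni et al. (2002), and all function names are illustrative.

```python
import math
from collections import Counter

def bleu(reference, hypothesis, max_n=4):
    # Simplified sentence-level BLEU: clipped n-gram precision with add-one
    # smoothing and a brevity penalty (a stand-in for 4-gram BLEU).
    ref, hyp = reference.split(), hypothesis.split()
    if not hyp:
        return 0.0
    log_prec = 0.0
    for n in range(1, max_n + 1):
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
        overlap = sum(min(c, ref_ngrams[g]) for g, c in hyp_ngrams.items())
        total = max(sum(hyp_ngrams.values()), 1)
        log_prec += math.log((overlap + 1) / (total + 1)) / max_n
    bp = min(1.0, math.exp(1 - len(ref) / len(hyp)))  # brevity penalty
    return bp * math.exp(log_prec)

def multi_ref_scores(references, hypotheses):
    # Precision: average over hypotheses of the best-matching reference score;
    # Recall: average over references of the best-matching hypothesis score.
    # (The paper generates as many hypotheses as there are references, so both
    # averages are over N_r items.)
    precision = sum(max(bleu(r, h) for r in references) for h in hypotheses) / len(hypotheses)
    recall = sum(max(bleu(r, h) for h in hypotheses) for r in references) / len(references)
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

With identical hypothesis and reference sets, all three scores reach their maximum of 1.0, which is a quick sanity check on the best-match structure of the metrics.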
[ "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "objective", "abstain", "abstain", "method", "method", "abstain", "method", "result", "abstain", "result", "result", "abstain", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result" ]
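The inference step quoted in the first excerpt above (adding a random vector r, uniformly sampled from a hypersphere of radius |r|, to the predicted z_S2S(x) before decoding) can be sketched as below. This is a hedged illustration, not the paper's implementation: it reads "uniformly sampled from a hypersphere" as uniform over the sphere surface of the given radius (normalized Gaussian direction), the function names are invented, and 1.5 is the validation-tuned radius reported in the excerpt.

```python
import math
import random

def sample_from_hypersphere(dim, radius, rng=random):
    # Uniform direction via a normalized Gaussian vector, scaled to |r|.
    g = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    norm = math.sqrt(sum(x * x for x in g))
    return [radius * x / norm for x in g]

def sample_latent(z_s2s, radius, rng=random):
    # z(x, r) = z_S2S(x) + r, fed to the decoder as the GRU initial state;
    # sampling several r values yields multiple hypotheses per context.
    r = sample_from_hypersphere(len(z_s2s), radius, rng)
    return [z + d for z, d in zip(z_s2s, r)]
```

Each call perturbs the prediction by exactly |r| in a random direction, so relevance (distance) is held fixed while diversity (direction) varies, matching the geometric reading of the latent space in the excerpt.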
[ "We develop a formal hierarchy of the expressive capacity of RNN architectures.", "The hierarchy is based on two formal properties: space complexity, which measures the RNN's memory, and rational recurrence, defined as whether the recurrent update can be described by a weighted finite-state machine.", "We place several RNN variants within this hierarchy.", "For example, we prove the LSTM is not rational, which formally separates it from the related QRNN (Bradbury et al., 2016).", "We also show how these models' expressive capacity is expanded by stacking multiple layers or composing them with different pooling functions.", "Our results build on the theory of saturated RNNs (Merrill, 2019).", "While formally extending these findings to unsaturated RNNs is left to future work, we hypothesize that the practical learnable capacity of unsaturated RNNs obeys a similar hierarchy.", "Experimental findings from training unsaturated networks on formal languages support this conjecture.", "While neural networks are central to the performance of today's strongest NLP systems, theoretical understanding of the formal properties of different kinds of networks is still limited.", "It is established, for example, that the Elman (1990) RNN is Turing-complete, given infinite precision and computation time (Siegelmann and Sontag, 1992, 1994; Chen et al., 2018).", "But tightening these unrealistic assumptions has serious implications for expressive power (Weiss et al., 2018), leaving a significant gap between classical theory and practice, which theorems in this paper attempt to address.", "Recently, Peng et al.
(2018) introduced rational RNNs, a subclass of RNNs whose internal state can be computed by independent weighted finite automata (WFAs).", "Intuitively, such models have a computationally simpler recurrent update than (Figure 1: Hierarchy of state expressiveness for saturated RNNs and related models.)", "conventional models like long short-term memory networks (LSTMs; Hochreiter and Schmidhuber, 1997).", "Empirically, rational RNNs like the quasi-recurrent neural network (QRNN; Bradbury et al., 2016) and unigram rational RNN (Dodge et al., 2019) perform comparably to the LSTM, with a smaller computational budget.", "Still, the underlying simplicity of rational models raises the question of whether their expressive power is fundamentally limited compared to other RNNs.", "In a separate line of work, Merrill (2019) introduced the saturated RNN as a formal model for analyzing the capacity of RNNs.", "A saturated RNN is a simplified network where all activation functions have been replaced by step functions.", "The saturated network may be seen intuitively as a stable version of its original RNN, in which the internal activations act discretely.", "A growing body of work, including this paper, finds that the saturated theory predicts differences in practical learnable capacity for various RNN architectures (Weiss et al., 2018; Merrill, 2019; Suzgun et al., 2019a).", "We compare the expressive power of rational and non-rational RNNs, distinguishing between state expressiveness (what kind and amount of information the RNN states can capture) and language expressiveness (what languages can be recognized when the state is passed to a classifier).", "To do this, we build on the theory of saturated RNNs.", "State expressiveness We introduce a unified hierarchy (Figure 1) of the functions expressible by the states of rational and non-rational RNN encoders.", "The hierarchy is defined by two formal properties: space complexity, which is a measure of network memory, and rational recurrence, whether the internal structure of the RNN can be described by WFAs.", "The hierarchy reveals concrete differences between LSTMs and QRNNs, and further separates both from a class containing convolutional neural networks (CNNs; Lecun and Bengio, 1995; Kim, 2014), Elman RNNs, and gated recurrent units (GRUs; Cho et al., 2014).", "We provide the first formal proof that LSTMs can encode functions that rational recurrences cannot.", "On the other hand, we show that the saturated Elman RNN and GRU are rational recurrences with constant space complexity, whereas the QRNN has unbounded space complexity.", "We also show that an unrestricted WFA has rich expressive power beyond any saturated RNN we consider, including the LSTM.", "This difference potentially opens the door to more expressive RNNs incorporating the computational efficiency of rational recurrences.", "Language expressiveness When applied to classification tasks like language recognition, RNNs are typically combined with a decoder: additional layer(s) that map their hidden states to a prediction.", "Thus, despite differences in state expressiveness, rational RNNs might be able to achieve comparable empirical performance to non-rational RNNs on NLP tasks.", "In this work, we consider the setup in which the decoders only view the final hidden state of the RNN.", "We demonstrate that (footnote 2: space complexity measures the number of different configurations an RNN can reach as a function of input length.)", "(Formal definition deferred until Section", "2.)
(Footnote 3: This is common, but not the only possibility.)", "a sufficiently strong decoder can overcome some of the differences in state expressiveness between different models.", "For example, an LSTM can recognize a^n b^n with a single decoding layer, whereas a QRNN provably cannot until the decoder has two layers.", "However, we also construct a language that an LSTM can recognize without a decoder, but a QRNN cannot recognize with any decoder.", "Thus, no decoder can fully compensate for the weakness of the QRNN compared to the LSTM.", "Experiments Finally, we conduct experiments on formal languages, justifying that our theorems correctly predict which languages unsaturated recognizers trained by gradient descent can learn.", "Thus, we view our hierarchy as a useful formal tool for understanding the relative capabilities of different RNN architectures.", "Roadmap We present the formal devices for our analysis of RNNs in Section", "2. In Section 3, we develop our hierarchy of state expressiveness for single-layer RNNs.", "In Section 4, we shift to study RNNs as language recognizers.", "Finally, in Section 5, we provide empirical results evaluating the relevance of our predictions for unsaturated RNNs.", "In this work, we analyze RNNs using formal models from automata theory, in particular, WFAs and counter automata.", "In this section, we first define the basic notion of an encoder studied in this paper, and then introduce more specialized formal concepts: WFAs, counter machines (CMs), space complexity, and, finally, various RNN architectures.", "We view both RNNs and automata as encoders: machines that can be parameterized to compute a set of functions f : Σ* → ℚ^k, where Σ is an input alphabet and ℚ is the set of rationals.", "Given an encoder M and parameters θ, we use M_θ to represent the specific function that the parameterized encoder computes.", "For each encoder, we refer to the set of functions that it can compute as its state expressiveness.", "For example, a
deterministic finite state acceptor (DFA) is an encoder whose parameters are its transition graph.", "Its state expressiveness is the indicator functions for the regular languages.", "Formally, a WFA is a non-deterministic finite automaton where each starting state, transition, and", "final state is weighted.", "Let Q denote the set of states, Σ the alphabet, and ℚ the rationals.", "This weighting is specified by three functions:", "1. Initial state weights λ : Q → ℚ 2. Transition weights τ : Q × Σ × Q → ℚ 3. Final state weights ρ : Q → ℚ The weights are used to encode any string x ∈ Σ*: Definition 1 (Path score).", "Let π be a path of the form q_0 →x_1→ q_1 →x_2→ ⋯ →x_t→ q_t through WFA A.", "The score of π is given by A[π] = λ(q_0) (∏_{i=1}^{t} τ(q_{i−1}, x_i, q_i)) ρ(q_t).", "By Π(x), denote the set of paths producing x.", "Definition 2 (String encoding).", "The encoding computed by a WFA A on string x is A[x] = Σ_{π ∈ Π(x)} A[π].", "Hankel matrix Given a function f : Σ* → ℚ and two enumerations ω, ω̄ of the strings in Σ*, we define the Hankel matrix of f as the infinite matrix [H_f]_{ij} = f(ω_i · ω̄_j),", "where · denotes concatenation.", "It is sometimes convenient to treat H_f as though it is directly indexed by Σ*, e.g.
[H_f]_{ω_i, ω̄_j} = f(ω_i · ω̄_j), or refer to a sub-block of a Hankel matrix, row- and column-indexed by prefixes and suffixes P, S ⊆ Σ*.", "The following result relates the Hankel matrix to WFAs: Theorem 1 (Carlyle and Paz, 1971; Fliess, 1974).", "For any f : Σ* → ℚ, there exists a WFA that computes f if and only if H_f has finite rank.", "Rational series (Sakarovitch, 2009) For all k ∈ ℕ, f : Σ* → ℚ^k is a rational series if there exist WFAs A_1, …, A_k such that, for all x ∈ Σ* and 1 ≤ i ≤ k, A_i[x] = f_i(x).", "We now turn to introducing a different type of encoder: the real-time counter machine (CM; Merrill, 2020; Fischer, 1966; Fischer et al., 1968).", "CMs are deterministic finite-state machines augmented with finitely many integer counters.", "While processing a string, the machine updates these counters, and may use them to inform its behavior.", "We view counter machines as encoders mapping Σ* → ℤ^k.", "For m ∈ ℕ and ∘ ∈ {+, −, ×}, let ∘m denote the function f(n) = n ∘ m.", "Definition 3 (General CM; Merrill, 2020).", "A k-counter CM is a tuple ⟨Σ, Q, q_0, u, δ⟩ with", "1. A finite alphabet Σ 2. A finite set of states Q, with initial state q_0 3. A counter update function u : Σ × Q × {0, 1}^k → {×0, −1, +0, +1}^k 4. A state transition function δ : Σ × Q × {0, 1}^k → Q. A CM processes input tokens {x_t}_{t=1}^{n} sequentially.", "q_{t+1} = δ(x_t, q_t, 1⃗_{=0}(c_t)), c_{t+1} = u(x_t, q_t, 1⃗_{=0}(c_t))(c_t),", "where 1⃗_{=0} is a broadcasted zero-check operation, i.e., 1⃗_{=0}(v)_i ≜ 1_{=0}(v_i).", "In (2) and (3), note that the machine only views the zeroness of each counter, and not its actual value.", "A general CM's encoding of a string x is the value of its counter vector c_t after processing all of x.", "1. A CM is Σ-restricted iff u and δ depend only on the current input σ ∈ Σ.", "2. A CM is (Σ × Q)-restricted iff u and δ depend only on the current input σ ∈ Σ and the current state q ∈ Q.", "3.
A CM is (Σ^w)-restricted iff it is (Σ × Q)-restricted, and the states Q are windows over the last w input tokens, e.g., Q = Σ^w.", "These restrictions prevent the machine from being counter-aware: u and δ cannot condition on the counters' values.", "As we will see, restricted CMs have natural parallels in the realm of rational RNNs.", "In Subsection 3.2, we consider the relationship between counter awareness and rational recurrence.", "As in Merrill (2019), we also analyze encoders in terms of state space complexity, measured in bits.", "Definition 4 (Bit complexity).", "An encoder M : Σ* → ℚ^k has T(n) space iff max_θ |{ s_{M_θ}(x) | x ∈ Σ^{≤n} }| = 2^{Θ(T(n))}, (footnote 5: the states q ∈ Σ^{<w} represent the beginning of the sequence, before w input tokens have been seen)", "where s_{M_θ}(x) is a minimal representation of M_θ's internal configuration immediately after x.", "We consider three asymptotic space complexity classes: Θ(1), Θ(log n), and Θ(n), corresponding to encoders that can reach a constant, polynomial, and exponential (in sequence length) number of configurations, respectively.", "Intuitively, encoders that can dynamically count but cannot use more complex memory like stacks, such as all CMs, are in Θ(log n) space.", "Encoders that can uniquely encode every input sequence are in Θ(n) space.", "A saturated neural network is a discrete approximation of a neural network considered by Merrill (2019), who calls it an asymptotic network.", "Given a parameterized neural encoder M_θ(x), we construct the saturated network s-M_θ(x) by taking s-M_θ(x) = lim_{N→∞} M_{Nθ}(x) (4) where Nθ denotes the parameters θ multiplied by a scalar N.", "This transforms each squashing function (sigmoid, tanh, etc.)
to its extreme values (0, 1).", "In line with prior work (Weiss et al., 2018; Merrill, 2019; Suzgun et al., 2019b), we consider saturated networks a reasonable approximation for analyzing practical expressive power.", "For clarity, we denote the saturated approximation of an architecture by prepending it with s, e.g., s-LSTM.", "A recurrent neural network (RNN) is a parameterized update function g_θ : ℚ^k × ℚ^{d_x} → ℚ^k, where θ are the rational-valued parameters of the RNN and d_x is the dimension of the input vector.", "g_θ takes as input a current state h ∈ ℚ^k and input vector x ∈ ℚ^{d_x}, and produces the next state.", "Defining the initial state as h_0 = 0, an RNN can be applied to an input sequence x ∈ (ℚ^{d_x})* one vector at a time to create a sequence of states {h_t}_{t ≤ |x|}, each representing an encoding of the prefix of x up to that time step.", "RNNs can be used to encode sequences over a finite alphabet x ∈ Σ* by first applying a mapping (embedding) e : Σ → ℚ^{d_x}.", "(Footnote 6: I.e., the minimal state representation needed to compute M_θ correctly.", "This distinction is important for architectures like attention, for which some implementations may retain unusable information such as input embedding order.)", "In stacked (multi-layer) RNNs, the sequence of states h_1, h_2, ..., h_{|x|} generated by each RNN on its input is fed as input to the layer above it, and only the first layer receives the original input sequence x as input.", "The recurrent update function g can take several forms.", "The original and simplest form is that of the Elman RNN.", "Since then, more elaborate forms using gating mechanisms have become popular, among them the LSTM, GRU, and QRNN.", "Elman RNNs (Elman, 1990) Let x_t be a vector embedding of x_t.", "For brevity, we suppress the bias terms in these (and the following) affine operations: h_t = tanh(W x_t + U h_{t−1}). (5)", "We refer to the saturated Elman RNN as the s-RNN.", "The s-RNN has Θ(1) space (Merrill, 2019).", "LSTMs (Hochreiter and Schmidhuber, 1997) An LSTM is a gated RNN with a state vector h_t ∈ ℚ^k and memory vector c_t ∈ ℚ^k.", "f_t = σ(W^f x_t + U^f h_{t−1}) (6) i_t = σ(W^i x_t + U^i h_{t−1}) (7) o_t = σ(W^o x_t + U^o h_{t−1}) (8) c̃_t = tanh(W^c x_t + U^c h_{t−1}) (9) c_t = f_t ⊙ c_{t−1} + i_t ⊙ c̃_t (10) h_t = o_t ⊙ tanh(c_t).", "(11)", "The LSTM can use its memory vector c_t as a register of counters (Weiss et al., 2018).", "Merrill (2019) showed that the s-LSTM has Θ(log n) space.", "GRUs (Cho et al., 2014) Another kind of gated RNN is the GRU.", "z_t = σ(W^z x_t + U^z h_{t−1}) (12) r_t = σ(W^r x_t + U^r h_{t−1}) (13) u_t = tanh(W^u x_t + U^u (r_t ⊙ h_{t−1})) (14) h_t = z_t ⊙ h_{t−1} + (1 − z_t) ⊙ u_t.", "(15)", "Weiss et al. (2018) found that, unlike the LSTM, the GRU cannot use its memory to count dynamically.", "Merrill (2019) showed the s-GRU has Θ(1) space.", "(Footnote 7: With respect to our presented definition of RNNs, the concatenation of h_t and c_t can be seen as the recurrently updated state.", "However, in all discussions of LSTMs, we treat only h_t as the LSTM's state, in line with common practice.)", "QRNNs Bradbury et al.
(2016) propose QRNNs as a computationally efficient hybrid of LSTMs and CNNs.", "Let ∗ denote convolution over time, let W^z, W^f, W^o ∈ ℚ^{d_x × w × k} be convolutions with window length w, and let X ∈ ℚ^{n × d_x} denote the matrix of n input vectors.", "An ifo-QRNN (henceforth referred to as a QRNN) with window length w is defined by W^z, W^f, and W^o as follows: Z = tanh(W^z ∗ X) (16) F = σ(W^f ∗ X) (17) O = σ(W^o ∗ X) (18) c_t = f_t ⊙ c_{t−1} + i_t ⊙ z_t (19) h_t = o_t ⊙ c_t (20) where z_t, f_t, o_t are respectively rows of Z, F, O.", "A QRNN Q can be seen as an LSTM in which all uses of the state vector h_t have been replaced with a computation over the last w input tokens; in this way, it is similar to a CNN.", "The s-QRNN has Θ(log n) space, as the analysis of Merrill (2019) for the s-LSTM directly applies.", "Indeed, any s-QRNN is also a (Σ^w)-restricted CM extended with :=1 (set to 1) operations.", "We now turn to presenting our results.", "In this section, we develop a hierarchy of single-layer RNNs based on their state expressiveness.", "A set-theoretic view of the hierarchy is shown in Figure", "2. Let R be the set of rational series.", "The hierarchy relates Θ(log n) space to the following sets: RR As in Peng et al.
(2018), we say that an encoder is rationally recurrent (RR) iff its state expressiveness is a subset of R.", "RR-hard An encoder is RR-hard iff its state expressiveness contains R.", "A Turing machine is RR-hard, as it can simulate any WFA.", "RR-complete Finally, an encoder is RR-complete iff its state expressiveness is equivalent to R.", "A trivial example of an RR-complete encoder is a vector of k WFAs.", "The different RNNs are divided between the intersections of these classes.", "In Subsection 3.1, we prove that the s-LSTM, already established to have Θ(log n) space, is not RR.", "In Subsection 3.2, we demonstrate that encoders with restricted counting ability (e.g., QRNNs) are RR, and in Subsection 3.3, we show the same for all encoders with finite state (CNNs, s-RNNs, and s-GRUs).", "In Subsection 3.4, we demonstrate that none of these RNNs are RR-hard.", "In Appendix F, we extend this analysis from RNNs to self attention.", "We find that encoders like the s-LSTM, which, as discussed in Subsection 2.3, is aware of its current counter values, are not RR.", "To do this, we construct f_0 : {a, b}* → ℕ that requires counter awareness to compute on strings of the form a*b*, making it not rational.", "We then construct an s-LSTM computing f_0 over a*b*.", "Definition 5 (Rectified counting).", "f_0 : x ↦ #_{a−b}(x) if #_{a−b}(x) > 0, and 0 otherwise (where #_{a−b}(x) denotes the number of a's in x minus the number of b's).", "Lemma", "1. For all f : {a, b}* → ℕ, if f(a^i b^j) = f_0(a^i b^j) for all i, j ∈ ℕ, then f ∉ R.", "Proof.", "Consider the Hankel sub-block A_n of H_f with prefixes P_n = {a^i}_{i ≤ n} and suffixes S_n = {b^j}_{j ≤ n}.", "A_n is lower-triangular, with entry (i, j) equal to max(i − j, 0): its first rows are (0, 0, 0, …), (1, 0, 0, …), (2, 1, 0, …), and so on.", "Therefore rank(A_n) = n − 1.", "Thus, for all n, there is a sub-block of H_f with rank n − 1, and so rank(H_f) is unbounded.", "It follows from Theorem 1 that there is no WFA computing f.", "Theorem", "2.
The s-LSTM is not RR.", "Proof.", "Assume the input has the form a^i b^j for some i, j.", "Consider the following LSTM (footnote 8): i_t = σ(10N h_{t−1} − 2N 1_{=b}(x_t) + N) (22) c̃_t = tanh(N 1_{=a}(x_t) − N 1_{=b}(x_t)) (23) c_t = c_{t−1} + i_t c̃_t (24) h_t = tanh(c_t).", "(25)", "Let N → ∞.", "Then i_t = 0 iff x_t = b and h_{t−1} = 0 (i.e., c_{t−1} = 0).", "Meanwhile, c̃_t = 1 iff x_t = a.", "The update term becomes i_t c̃_t = 1 if x_t = a; −1 if x_t = b and c_{t−1} > 0; 0 otherwise.", "(26)", "For a string a^i b^j, the update in (26) is equivalent to the CM in Figure", "3. Thus, by Lemma 1, the s-LSTM (and the general CM) is not RR.", "While the counter awareness of a general CM enables it to compute non-rational functions, CMs that cannot view their counters are RR.", "Theorem", "3. Any Σ-restricted CM is RR.", "Proof.", "We show that any function that a Σ-restricted CM can compute can also be computed by a collection of WFAs.", "The CM update operations (−1, +0, +1, or ×0) can all be re-expressed in terms of functions r(x), u(x) : Σ → ℤ^k to get: c_t = r(x_t) ⊙ c_{t−1} + u(x_t) (27) c_t = Σ_{i=1}^{t} (∏_{j=i+1}^{t} r(x_j)) u(x_i).", "A WFA computing [c_t]_i is shown in Figure", "4. (Footnote 8: f_t and o_t are set to 1, such that c_t = c_{t−1} + i_t c̃_t.)", "The WFA in Figure 4 also underlies unigram rational RNNs (Peng et al., 2018).", "Thus, Σ-restricted CMs are actually a special case of unigram WFAs.", "In Appendix A, we show the more general result: Theorem", "4.
Any (Σ × Q)-restricted CM is RR.", "In many rational RNNs, the updates at different time steps are independent of each other outside of a window of w tokens.", "Theorem 4 tells us this independence is not an essential property of rational encoders.", "Rather, any CM where the update is conditioned by finite state (as opposed to being conditioned by a local window) is in fact RR.", "Furthermore, since (Σ^w)-restricted CMs are a special case of (Σ × Q)-restricted CMs, Theorem 4 can be directly applied to show that the s-QRNN is RR.", "See Appendix A for further discussion of this.", "Theorem 4 motivates us to also think about finite-space encoders: i.e., encoders with no counters where the output at each prefix is fully determined by a finite amount of memory.", "The following lemma implies that any finite-space encoder is RR: Lemma", "2. Any function f : Σ* → Q computable by an O(1)-space encoder is a rational series.", "Proof.", "Since f is computable in O(1) space, there exists a DFA A_f whose accepting states are isomorphic to the range of f.", "We convert A_f to a WFA by labelling each accepting state with the value of f that it corresponds to.", "We set the starting weight of the initial state to 1, and 0 for every other state.", "We assign each transition weight 1.", "Since the CNN, s-RNN, and s-GRU have finite state, we obtain the following result: Theorem", "5. The CNN, s-RNN, and s-GRU are RR.", "While Schwartz et al. (2018) and Peng et al. 
(2018) showed the CNN to be RR over the max-plus semiring, Theorem 5 shows the same holds for ⟨Q, ·, +⟩.", "While rational recurrence is often used to indicate the simplicity of an RNN architecture, we find in this section that WFAs are surprisingly computationally powerful.", "Figure 5 shows a WFA mapping binary strings to their numeric values, proving WFAs have Θ(n) space.", "We now show that none of our RNNs are able to simulate an arbitrary WFA, even in the unsaturated form.", "[Figure 4: a WFA with initial state q_0 (weight 1) and state q_1, with transition weights u_i(σ) and r_i(σ).] Theorem", "6. Both the saturated and unsaturated RNN, GRU, QRNN, and LSTM 9 are not RR-hard.", "Proof.", "Consider the function f_b mapping binary strings to their value, e.g. 101 ↦ 5.", "The WFA in Figure 5 shows that this function is rational.", "The value of f_b grows exponentially with the sequence length.", "On the other hand, the value of the RNN and GRU cell is bounded by 1, and QRNN and LSTM cells can only grow linearly in time.", "Therefore, these encoders cannot compute f_b.", "In contrast, memory networks can have Θ(n) space.", "Appendix G explores this for stack RNNs.", "Appendix F presents preliminary results extending saturation analysis to self attention.", "We show saturated self attention is not RR and consider its space complexity.", "We hope further work will more completely characterize saturated self attention.", "Having explored the set of functions expressible internally by different saturated RNN encoders, we turn to the languages recognizable when using them with a decoder.", "We consider the following setup:", "1. An s-RNN encodes x to a vector h_t ∈ Q^k.", "2. 
A decoder function d maps the last state h_t to an accept/reject decision in {1, 0}.", "9 As well as CMs.", "We say that a language L is decided by an encoder-decoder pair e, d if d(e(x)) = 1 for every sequence x ∈ L and otherwise d(e(x)) = 0.", "We explore which languages can be decided by different encoder-decoder pairings.", "Some related results can be found in Cortes and Mohri (2000), who study the expressive power of WFAs in relation to CFGs under a slightly different definition of language recognition.", "The first decoder we consider is a linear threshold function d_1(h) = 1_{>0}(w · h + b), parameterized by w and b.", "For an encoder architecture E, we denote by D_1(E) the set of languages decidable by E with d_1.", "We use D_2(E) analogously for a 2-layer decoder with 1_{>0} activations, where the first layer has arbitrary width.", "We refer to sets of strings using regular expressions, e.g. a* = {a^i | i ∈ N}.", "To illustrate the purpose of the decoder, consider the following language: L = {x ∈ {a, b}* : #_{a-b}(x) ≥ 0}.", "The Hankel sub-block of the indicator function for L over P = a*, S = b* is lower triangular.", "Therefore, no RR encoder can compute it.", "However, adding the D_1 decoder allows us to compute this indicator function with an s-QRNN, which is RR.", "We set the s-QRNN layer to compute the simple series c_t = #_{a-b}(x) (by increasing on a and decreasing on b).", "The D_1 layer then checks c_t ≥ 0.", "So, while the indicator function for L is not itself rational, it can be easily recovered from a rational representation.", "Thus, L ∈ D_1(s-QRNN).", "We compare the language expressiveness of several rational and non-rational RNNs on the following languages: a^n b^n and a^n b^n Σ*.", "a^n b^n is more interesting than L because the D_1 decoder cannot decide it simply by asking the encoder to track #_{a-b}(x), as that would require it to compute the non-linearly separable '= 0' function.", "Thus, it appears at first that deciding a^n b^n with D_1 might require a non-rational RNN encoder.", "Let ∘ denote stacking two layers.", 
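The counter-plus-threshold construction described above (an s-QRNN-style counter series c_t = #_{a-b}(x) followed by a single-threshold d_1 check) can be sketched as a toy simulation. This is our own illustration, not code from the paper, and the function names are ours:

```python
def counter_encoding(x: str) -> int:
    """Saturated-QRNN-style counter c_t = #_{a-b}(x):
    add 1 on 'a', subtract 1 on 'b' (no resets)."""
    c = 0
    for ch in x:
        c += 1 if ch == "a" else -1
    return c

def d1(c: int) -> int:
    """One-threshold decoder: accept iff the counter is non-negative."""
    return 1 if c >= 0 else 0

def in_L(x: str) -> int:
    """Indicator for L = {x in {a,b}* : #_{a-b}(x) >= 0}."""
    return d1(counter_encoding(x))
```

The counter series itself is rational, while the indicator for L is not; the decoder is exactly the piece that recovers the non-rational indicator from the rational representation.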
"We will go on to discuss the following results: a^n b^n ∈ D_1(WFA) (33); a^n b^n ∈ D_1(s-LSTM) (34); a^n b^n ∉ D_1(s-QRNN) (35); a^n b^n ∈ D_1(s-QRNN ∘ s-QRNN) (36); a^n b^n ∈ D_2(s-QRNN) (37); a^n b^n Σ* ∈ D_1(s-LSTM) (38); a^n b^n Σ* ∉ D(s-QRNN) for any D (39); a^n b^n Σ* ∪ {ε} ∈ D_1(s-QRNN ∘ s-QRNN) (40). WFAs (Appendix B) In Theorem 8 we present a function f : Σ* → Q satisfying f(x) > 0 iff x ∈ a^n b^n, and show that H_f has finite rank.", "It follows that there exists a WFA that can decide a^n b^n with the D_1 decoder.", "Counterintuitively, a^n b^n can be recognized using rational encoders.", "QRNNs (Appendix C) Although a^n b^n ∈ D_1(WFA), it does not follow that every rationally recurrent model can also decide a^n b^n with the help of D_1.", "Indeed, in Theorem 9, we prove that a^n b^n ∉ D_1(s-QRNN), whereas a^n b^n ∈ D_1(s-LSTM) (Theorem 13).", "It is important to note that, with a more complex decoder, the QRNN could recognize a^n b^n.", "For example, the s-QRNN can encode c_1 = #_{a-b}(x) and set c_2 to check whether x contains ba, from which a D_2 decoder can recognize a^n b^n (Theorem 10).", "This does not mean the hierarchy dissolves as the decoder is strengthened.", "We show that a^n b^n Σ*, which seems like a trivial extension of a^n b^n, is not recognizable by the s-QRNN with any decoder.", "This result may appear counterintuitive, but in fact highlights the s-QRNN's lack of counter awareness: it can only passively encode the information needed by the decoder to recognize a^n b^n.", "Failing to recognize that a valid prefix has been matched, it cannot act to preserve that information after additional input tokens are seen.", "We present a proof in Theorem 11.", "In contrast, in Theorem 14 we show that the s-LSTM can directly encode an indicator for a^n b^n Σ* in its internal state.", "Proof sketch: a^n b^n Σ* ∉ D(s-QRNN).", "A sequence s_1 ∈ a^n b^n Σ* is shuffled to create s_2 ∉ a^n b^n Σ* with an identical multi-set of counter updates.", "10 
Counter updates would be order agnostic if not for reset operations, and resets mask all history, so extending s 1 and s 2 with a single suffix s containing all of their w -grams reaches the same final state.", "Then for any D , D ( s-QRNN ) cannot separate them.", "We formalize this in Theorem 11.", "We refer to this technique as the suffix attack , and note that it can be used to prove for multiple other languages L D 2 ( s-QRNN ) that L is not in D ( s-QRNN ) for any decoder D .", "2-layer QRNNs Adding another layer overcomes the weakness of the 1-layer s-QRNN, at least for deciding a n b n .", "This follows from the fact that a n b n D 2 ( s-QRNN ) : the second QRNN layer can be used as a linear layer.", "Similarly, we show in Theorem 10 that a 2-layer s-QRNN can recognize a n b n { (cid:15) } .", "This suggests that adding a second s-QRNN layer compensates for some of the weakness of the 1-layer s-QRNN, which, by the same argument for a n b n cannot recognize a n b n { (cid:15) } with any decoder.", "Finally, we study the theoretical case where the decoder is an arbitrary recursively enumerable (RE) function.", "We view this as a loose upper bound of stacking many layers after a rational encoder.", "What information is inherently lost by using a rational encoder?", "WFAs can uniquely encode each input, making them Turing-complete under this setup; however, this does not hold for rational s-RNNs.", "RR-complete Assuming an RR-complete encoder, a WFA like Figure 5 can be used to encode each possible input sequence over to a unique number.", "We then use the decoder as an oracle to decide any RE language.", "Thus, an RR-complete encoder with an RE decoder is Turing-complete.", "Bounded space However, the (log n ) space bound of saturated rational RNNs like the s-QRNN means these models cannot fully encode the input.", "In other words, some information about the prefix x : t must be lost in c t .", "Thus, rational s-RNNs are not Turing-complete with an RE 
decoder.", "10 Since QRNN counter updates depend only on the w-grams present in the sequence.", "We test empirically whether these predictions carry over to the learnable capacity of unsaturated RNNs.", "11 We compare the QRNN and LSTM when coupled with a linear decoder D_1.", "We also train a 2-layer QRNN (QRNN2) and a 1-layer QRNN with a D_2 decoder (QRNN+).", "We train on strings of length 64, and evaluate generalization on longer strings.", "We also compare to a baseline that always predicts the majority class.", "The results are shown in Figure", "6. We provide further experimental details in Appendix E. Experiment 1 We use the following language, which has similar formal properties to a^n b^n, but with a more balanced label distribution: L_5 = {x ∈ (a|b)* : |#_{a-b}(x)| < 5}.", "In line with (34), the LSTM decides L_5 perfectly for n ≤ 64, and generalizes fairly well to longer strings.", "As predicted in (35), the QRNN cannot fully learn L_5 even for n = 64.", "Finally, as predicted in (36) and (37), the 2-layer QRNN and the QRNN with D_2 do learn L_5.", "However, we see that they do not generalize as well as the LSTM for longer strings.", "We hypothesize that these multi-layer models require more epochs to reach the same generalization performance as the LSTM. (11: https://github.com/viking-sudo-rm/rr-experiments)", "Experiment 2 We also consider a^n b^n Σ*.", "As predicted in (38) and (40), the LSTM and 2-layer QRNN decide a^n b^n Σ* flawlessly for n = 64.", "A 1-layer QRNN performs at the majority baseline for all n with both a 1- and 2-layer decoder.", "Both of these failures were predicted in (39).", "Thus, the only models that learned a^n b^n Σ* were exactly those predicted by the saturated theory.", "We develop a hierarchy of saturated RNN encoders, considering two angles: space complexity and rational recurrence.", "Based on the hierarchy, we formally distinguish the state expressiveness of the non-rational s-LSTM and its rational counterpart, the s-QRNN.", 
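The experimental language L_5 above (strings over {a, b} whose final a-minus-b count is below 5 in absolute value) has a simple exact labeler; the sketch below is our own illustration, not the released experiment code:

```python
def count_diff(x: str) -> int:
    """#_{a-b}(x): the number of a's minus the number of b's."""
    return x.count("a") - x.count("b")

def in_L5(x: str) -> bool:
    """Membership in L_5 = {x in (a|b)* : |#_{a-b}(x)| < 5}."""
    return abs(count_diff(x)) < 5
```

Note that |c| < 5 is the intersection of two linear cuts on the counter (c < 5 and c > -5), which a single-threshold decoder cannot express on its own; this is consistent with the contrast above between the plain QRNN and the QRNN2/QRNN+ variants.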
"We show further distinctions in state expressiveness based on encoder space complexity.", "Moreover, the hierarchy translates to differences in language recognition capabilities.", "Strengthening the decoder alleviates some, but not all, of these differences.", "We present two languages, both recognizable by an LSTM.", "We show that one can be recognized by an s-QRNN only with the help of a decoder, and that the other cannot be recognized by an s-QRNN with the help of any decoder.", "While this means existing rational RNNs are fundamentally limited compared to LSTMs, we find that it is not necessarily being rationally recurrent that limits them: in fact, we prove that a WFA can perfectly encode its input, something no saturated RNN can do.", "We conclude with an analysis that shows that an RNN architecture's strength must also take into account its space complexity.", "These results further our understanding of the inner workings of NLP systems.", "We hope they will guide the development of more expressive rational RNNs.", "We appreciate Amir Yehudayoff's help in finding the WFA used in Theorem 8, and the feedback of researchers at the Allen Institute for AI, our anonymous reviewers, and Tobias Jaroslaw.", "The project was supported in part by NSF grant IIS-1562364, Israel Science Foundation grant no. 1319/16, and the European Research Council under the EU's Horizon 2020 research and innovation program, grant agreement No. 802774 (iEXTRACT)." ]
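The Hankel-rank technique that drives Lemma 1 and Theorem 8 in the section above can be checked numerically. The sketch below is our own illustration (helper names are ours, not from the paper): it builds the sub-block A_n of H_{f_0} for the rectified-counting function over prefixes a^i and suffixes b^j, and confirms that its rank keeps growing with n:

```python
from fractions import Fraction

def f0(x: str) -> int:
    """Rectified counting from Lemma 1: max(#a - #b, 0)."""
    return max(x.count("a") - x.count("b"), 0)

def hankel_subblock(n: int):
    """Sub-block A_n of the Hankel matrix H_{f0}, with prefixes a^i
    and suffixes b^j for 0 <= i, j <= n."""
    return [[Fraction(f0("a" * i + "b" * j)) for j in range(n + 1)]
            for i in range(n + 1)]

def rank(m):
    """Exact matrix rank by Gaussian elimination over the rationals."""
    m = [row[:] for row in m]
    r = 0
    for col in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                factor = m[i][col] / m[r][col]
                m[i] = [a - factor * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# The rank of A_n grows without bound as n grows, so rank(H_{f0}) is
# unbounded and, by Theorem 1, no WFA computes f0.
ranks = [rank(hankel_subblock(n)) for n in (2, 4, 8)]
```

Exact rational arithmetic (rather than floating-point SVD) is used here so that the computed ranks are not affected by numerical tolerance.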
[ "objective", "abstain", "abstain", "abstain", "result", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "objective", "result", "result", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "objective", "method", "result", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", 
"abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "result", "abstain", "abstain", "method", "result", "result", "result", "result", "abstain", "other", "other" ]
[ "Temporal factors are tied to the growth of facts in realistic applications, such as the progress of diseases and the development of political situations; therefore, research on Temporal Knowledge Graphs (TKGs) attracts much attention.", "In TKGs, relation patterns inherent with temporality need to be studied for representation learning and reasoning across temporal facts.", "However, existing methods can hardly model temporal relation patterns, nor can they capture the intrinsic connections between relations as they evolve over time, and thus lack interpretability.", "In this paper, we propose a novel temporal modeling method which represents temporal entities as Rotations in Quaternion Vector Space (RotateQVS) and relations as complex vectors in Hamilton's quaternion space.", "We demonstrate our method can model key patterns of relations in TKG, such as symmetry, asymmetry, and inversion, and can further capture time-evolved relations theoretically.", "Empirically, we show that our method can boost the performance of link prediction tasks over four temporal knowledge graph benchmarks.", "Knowledge Graphs (KGs) have been widely adopted to represent informative knowledge or facts in real-world applications (Bollacker et al., 2008; Miller, 1995; Suchanek et al., 2007).", "However, as known facts are usually sparse, KGs are far from complete.", "Thus, Knowledge Graph Completion (KGC) methods are proposed to predict missing facts, i.e. 
links between entities (Bordes et al., 2013; Yang et al., 2015; Dettmers et al., 2018; Chen et al., 2021b).", "Furthermore, in the real world, many facts are bound to a particular time by nature.", "For example, the fact 'Barack Obama is the president of the USA' is only valid for the time period 2009-2017.", "To model such time-sensitive facts, Temporal Knowledge Graphs (TKGs) have", "recently drawn growing attention from both academic and industrial communities (Lautenschlager et al., 2015; Leetaru and Schrodt, 2013).", "TKG Embedding (TKGE) methods (Jiang et al., 2016; Dasgupta et al., 2018; Jin et al., 2020; Sadeghian et al., 2021) were proposed to represent entities and relations with temporal features in TKGs (Lautenschlager et al., 2015; Leetaru and Schrodt, 2013).", "But how to represent them with temporal interpretability remains a challenge for state-of-the-art TKGE models.", "Further, it is crucial for TKG Completion (TKGC) to leverage the learned temporal information.", "Previous static KGC works (Sun et al., 2020; Schlichtkrull et al., 2018; Gao et al., 2020) learn explainable embeddings of various relation patterns, so that the symmetric pattern (e.g. co-author), asymmetric pattern (e.g. affiliation), inverse pattern (e.g. buyer vs. seller) and complex composition pattern (e.g. father's wife (mother) vs. 
wife's father (father-in-law)) can be captured in static KGs.", "However, in TKGs, there are inherent connections between entities and their relations as they evolve over time.", "For example, the relation between Kit Harington and Rose Leslie is 'in love' in 2012, becomes 'engaged' in 2017, and then turns into 'married' in 2018.", "To the best of our knowledge, very few of the existing TKGE methods can capture such patterns.", "To address this problem, we take inspiration from Hamilton's quaternion number system (Hamilton, 1844; Zhang et al., 2019a; Gao et al., 2020) and propose a novel method based on quaternions.", "To be specific, we encode both entities and relations as quaternion embeddings, and then temporal entity embeddings can be represented as Rotations in Quaternion Vector Space (RotateQVS).", "Theoretically, we show the limitations of previous methods and demonstrate that performing quaternion embeddings can model symmetric, asymmetric, and inverse relation patterns.", "Meanwhile, we prove our method is capable of capturing time-evolving information in TKGs interpretably.", "We empirically evaluate our method over four TKGC benchmarks and report state-of-the-art performance consistently.", "Further, we perform analysis of the learned quaternion embeddings and show the abilities of our RotateQVS for modeling various relation patterns, including temporal evolution.", "1. We propose an original quaternion-based TKGC method, namely RotateQVS, which represents temporal information as rotations in quaternion vector space.", "2. We study temporally evolving relations, and we demonstrate the proposed RotateQVS can model various relation patterns including temporal evolution both theoretically and empirically.", "3. 
Our RotateQVS outperforms the SOTA methods over all of the ICEWS14, ICEWS05-15, YAGO11k and GDELT datasets on the link prediction task.", "The quaternion number system (Hamilton, 1844) is an extension of the traditional complex numbers.", "Recently, quaternions have been applied in static knowledge graph embedding (Zhang et al., 2019a; Gao et al., 2020).", "To help readers better understand our method in Section 3, we introduce the definition and basic operations of quaternions in this section.", "A quaternion is expressed as q = a + b i + c j + d k, and some key quaternion operations are defined as:", "Conjugate Similar to a traditional complex number, the conjugate of a quaternion is defined with the same real part and the opposite imaginary parts, that is q̄ = a - b i - c j - d k.", "Inner Product The inner product between q_1 = a_1 + b_1 i + c_1 j + d_1 k and q_2 = a_2 + b_2 i + c_2 j + d_2 k is the sum of the products of each corresponding factor: q_1 · q_2 = ⟨a_1, a_2⟩ + ⟨b_1, b_2⟩ + ⟨c_1, c_2⟩ + ⟨d_1, d_2⟩.", "Norm With the definition of conjugate and inner product, the norm of a quaternion is defined as: ||q|| = √(q ⊗ q̄) = √(q̄ ⊗ q) = √(a² + b² + c² + d²) (1) Inverse The inverse of a quaternion is defined from q^{-1} ⊗ q = q ⊗ q^{-1} = 1.", "Multiplying by q̄, we have q̄ ⊗ q ⊗ q^{-1} = q̄, from which we get: q^{-1} = q̄ / ||q||² (2) Hamilton Product For two quaternions q_1 and q_2, their product is determined by the products of the basis elements and the distributive law.", "The quaternion multiplication formula is: q_1 ⊗ q_2 = (a_1 a_2 - b_1 b_2 - c_1 c_2 - d_1 d_2) + (a_1 b_2 + b_1 a_2 + c_1 d_2 - d_1 c_2) i + (a_1 c_2 - b_1 d_2 + c_1 a_2 + d_1 b_2) j + (a_1 d_2 + b_1 c_2 - c_1 b_2 + d_1 a_2) k (3) Considering the conjugate of the Hamilton product, we can further deduce: conj(q_1 ⊗ q_2) = q̄_2 ⊗ q̄_1, conj(q_1 ⊗ q_2 ⊗ q_3) = q̄_3 ⊗ q̄_2 ⊗ q̄_1.", "In fact, the imaginary part b i + c j + d k of a quaternion behaves like a vector v = (b, c, d) in a 3D vector space.", "Thus, conveniently, we rewrite a quaternion 
using imaginary vector s: q = a + b i + c j + d k = a + v = ( a, 0 )+(0 , v ) .", "where v 1 v 2 is vector cross product, resulting in a vector, and v 1 v 2 is the dot product, which gives a scalar.", "Obviously, the multiplication of two imaginary vectors is non-commutative, as the cross product is non-commutative.", "Thus, the multiplication of two quaternions can be rewritten in 3D vector perspective: q 1 q 2 = ( a 1 , v 1 ) ( a 2 , v 2 ) =( a 1 a 2 v 1 v 2 , a 1 v 2 + a 2 v 1 + v 1 v 2 ) (7) 5844 3 Proposed Method In this section, we introduce a novel temporal modeling approach for TKG by representing temporal information as Rotations in Q uaternion V ector S pace (RotateQVS).", "Suppose that we have a temporal knowledge graph, noted as G .", "We use E to denote the set of entities, R to denote the set of relations, and T to denote the set of time stamps.", "Then, the temporal knowledge graph G can be defined as a collection of quadruples, noted as ( s, r, o, t ) , where a relation r R holds between a head entity s E and an tail entity o E at time t .", "The actual time t is represented by a time stamp T .", "Similar to Tero (Xu et al., 2020a) which utilizes a rotation in complex space, we also represent temporal information using rotations while in the quaternion vector space.", "In 3D vector space, according to Euler's rotation theorem (Euler, 1776; Verhoeff, 2014), any rotation or sequence of rotations of a rigid body or a coordinate system about a fixed point is equivalent to a single rotation by a given angle about a fixed axis (called the Euler axis) that runs through the fixed point.", "And an extension of Euler's formula for quaternion can be expressed as follows: q = e 2 ( v x i + v y j + u z k ) = cos 2 + ( v x i + v y j + u z k ) sin 2 , (8) where i , j , k are unit vectors representing the three Cartesian axes.", "3.2.1 Representing Time, Entities, and Relations: Quaternions provide us with a simple way to encode this axisangle representation in 
four numbers, and can be used to perform the rotation procedure in 3D vector space.", "By doing so, we constrain the time stamp embedding as a unit quaternion as = cos 2 + u sin 2 , (9) where u is a unit vector in the quaternion space.", "And for other elements of a quadruple ( s, r, o, t ) , based on the Hamilton's quaternions in Section 2, Figure 1: An illustration of the proposed rotation in 3D vector space, where v is the result of vector v rotating around the rotation axis u .", "s = { a s + b s i + c s j + d s k } r = { a r + b r i + c r j + d r k } o = { a o + b o i + c o j + d o k } ,", "where a { .", "} , b { .", "} , c { .", "} , d { .", "} R k .", "We make use of the quaternion rules to represent temporal information as rotations in 3D vector space.", "An abstract rotation procedure is illustrated in Figure 1. Theorem 1. Given a unit quaternion q = cos 2 + u sin 2 , where u R i + R j + R k is a unit vector (rotation axis) in a three-dimensional space, the result of vector v rotating around the rotation axis u is v = q v q 1 = q v q .", "(11)", "Theorem 1 is supported by Rodrigues' rotation formula (Rodrigues, 1840).", "1 We then define the functional mapping that reflects the temporal evolution of an entity embedding.", "For each time stamp , the functional mapping is an element-wise rotation from the basic entity embedding e (quaternion representation) to the time-specific entity embedding e t , which is as follows: e t = e 1 = ( a e + v e ) 1 = a e 1 + v e 1 = a e + v e 1 , (12) where a e and v e are the scalar/real and vec-tor/imaginary part of the entity quaternion representation e respectively.", "And according to Theorem 1, v e 1 is the result of vector v e rotating around the rotation axis u ( = cos 2 + u sin 2 , see 1 See proof in Appendix A 5845 Equation 9) which constitutes the vector/imaginary part of e t .", "Lemma 1. 
The vector (imaginary) part is rotated while the scalar (real) part remains unchanged in the functional mapping (Equation 12) which reflects the temporal evolution of an entity embedding.", "For a quadruple ( s, r, o, t ) , we make use of the functional mapping to get the time-specific entity embeddings s t and o t from the basic entity embeddings s and o : s t = s 1 , o t = o 1 .", "Considering the temporal evolution of entity embedding, the relation embedding r is regarded as a translation from the time-specific subject embedding s t to the conjugate of the time-specific object embedding o t .", "In other words, we aim to make s t + r = o t for all positive quadruples.", "Then, the score function can be defined as: f ( s, r, o, t ) = || s t + r o t || .", "Note that each embedding above is a quaternion representation, and || denotes the norm computation (see Equation 1).", "We use the same margin loss function with multiple negative sampling as proposed in (Sun et al., 2019), which has been proved to be effective on distance-based KGE models (Bordes et al., 2013; Sun et al., 2019) and as well as the TKGE models (Xu et al., 2019, 2020a).", "In details, our loss function is L = log ( f ( )) (cid:88) i =1 log ( f ( i ) ) , (15) where is the number of negative training samples over the positive one, is the positive training quadruple, ( ) denotes the sigmoid function, is a fixed margin, and i denotes the i -th negative sample generated by randomly corrupting the subject or the object of such as ( s , r, o, t ) and ( s, r, o , t ) .", "In this section, we demonstrate that our RotateQVS can model various relation patterns.", "In TKGE, four kinds of relation patterns are mostly considered and studied in previous static KGE and TKGE works (Sun et al., 2019; Gao et al., 2020).", "Their definitions are given as follows: Definition 1. A relation r is symmetric, if s, o, t , r ( s, o, t ) r ( o, s, t ) holds True.", "Definition 2. 
A relation r is asymmetric, if s, o, t , r ( s, o, t ) r ( o, s, t ) holds True.", "Definition 3. Relation r 1 is the inverse of r 2 , if s, o, t , r 1 ( s, o, t ) r 2 ( o, s, t ) holds True.", "Definition 4. Relation r 1 and r 2 are evolving over time from t 1 (time stamp 1 ) to t 2 (time stamp 2 ), if s, o , r 1 ( s, o, t 1 ) r 2 ( s, o, t 2 ) holds True.", "Comparing with other TKGE methods, we show RotateQVS can model all these four patterns, while previous methods (see Section 4.3) fail to do so.", "2 One advantage of applying quaternion embeddings is that our method supports all these relation patterns, while other representation forms cannot, such as TeRo (Xu et al., 2020a) using complex number system a + b i .", "3 As seen in our score function (Equation 14), our aim is to make s 1 + r = o 1 = o 1 o s = 1 r .", "(16)", "For temporal-evolution pattern, r 1 ( s, o, t 1 ) r 2 ( s, o, t 2 ) in Definition 4 can be expressed as: (cid:40) o s = 1 1 r 1 1 o s = 1 2 r 2 2 2 1 1 r 1 ( 2 1 1 ) 1 = r 2 .", "(17)", "In addition, based on Equation 17, we have 1 1 r 1 1 = 1 2 r 2 2 .", "(18) 2 Statistics of several baselines modeling on various relation patterns are summarised in Appendix E. 3 Theoretical analysis of TeRo's defect is shown in Section 3.4.", "Lemma 2. RotateQVS can model the symmetric pattern for TKG.", "(See proof in Appendix B)", "Lemma 3. RotateQVS can model the asymmetric pattern for TKG.", "(See proof in Appendix C)", "Lemma 4. RotateQVS can model the inversion pattern for TKG.", "(See proof in Appendix D)", "Lemma 5. 
RotateQVS can model the temporal-evolution pattern for TKG.", "Proof.", "For the same head entity and tail entity, if a relation r 1 holds at time t 1 (time stamp 1 ) and a relation r 2 holds at time t 2 (time stamp 2 ), we are supposed to get 2 1 1 r 1 ( 2 1 1 ) 1 = r 2 .", "Since we have Theorem 1, 1 1 r 1 1 and 1 2 r 2 2 can be regarded as rotations in quaternion vector space for r 1 and r 2 , respectively, which indicates the norm of r 1 is the same as that of r 2 .", "Furthermore, Lemma 1 indicates the rotation mapping keeps the scalar/real part unchanged for a vector.", "Thus, we can have the following deductions: (cid:40) || r 1 || = || r 2 || Re ( r 1 ) = Re ( r 2 ) .", "TeRo (Xu et al., 2020a) is the main baseline for our model.", "The rotated head entity embedding and tail entity embedding of TeRo in complex number system are s , and o respectively, where denotes Hermitian dot product.", "The translational score function of TeRo f ( s, r, o, t ) = || s t + r o t || is to make s + r = o = o = o .", "And we further prove that TeRo can not model relations with temporal evolution by means of reduction to absurdity.", "4 To this end, taking advantages of quaternion representation, our RotateQVS can deduce further derivation: s 1 + r = o 1 = o 1 o s = 1 r , (21) where time stamp embeddings and relation embeddings can be particularly extracted to analyse the influence of temporal evolution on relations, 4 See proof in Appendix F. 
Table 2 (statistics of the four experimented datasets) - Entities: ICEWS14 7,128, ICEWS05-15 10,488, YAGO11k 10,623, GDELT 500; Relations: 230, 251, 10, 20; Time stamps: 365, 4,017, 70, 366; Train: 72,826, 386,962, 16,408, 2,735,685; Validation: 8,941, 46,275, 2,050, 341,961; Test: 8,963, 46,092, 2,051, 341,961.", "since our derivation result is independent of entity embeddings.", "Above all, we demonstrate that our RotateQVS can model relations with temporal evolution while TeRo cannot.", "5 3.5 Complexity Comparison Table 1 summarizes the space complexities of several baselines and our model.", "n_e, n_r, n_τ and n_token denote the numbers of entities, relations, time stamps, and temporal tokens used in (García-Durán et al., 2018); and d is the dimension of embeddings.", "The space complexity of our RotateQVS is O(n_e d + n_r d + n_τ d), the same as TTransE (Leblay and Chekol, 2018), HyTE (Dasgupta et al., 2018) and TeRo (Xu et al., 2020a).", "To evaluate our proposed quaternion embeddings, we perform the link prediction task on four commonly used TKG benchmark datasets, namely ICEWS14, ICEWS05-15 (García-Durán et al., 2018), YAGO11k (Dasgupta et al., 2018) and GDELT (Trivedi et al., 2017).", "6 Table 2 summarises the details of the four datasets, where it is easy to see that ICEWS14 and ICEWS05-15 have many more relations than the other two datasets.", "ICEWS (Lautenschlager et al., 2015) is a repository containing political events with a specific timestamp.", "ICEWS14 and ICEWS05-15 (García-Durán et al., 2018) are two subsets of ICEWS corresponding to facts in 2014 and facts between 2005 and 2015.", "YAGO11k (Dasgupta et al., 2018) is a subset of YAGO3 (Mahdisoltani et al., 2015), where time annotations are represented as time intervals.", "We derive the dataset from HyTE (Dasgupta et al., 2018) 5 The proof process is shown in Lemma 5, and case-based analysis is shown in Section 4.5.2.", "6 GDELT is derived from https://github.com/BorealisAI/de-simple/tree/master/datasets/gdelt, and other datasets 
can be downloaded from https://github.com/soledad921/ATISE.", "to obtain the same year-level granularity by dropping the month and date information, which results in 70 different time stamps.", "For GDELT, we use the subset extracted by Trivedi et al., consisting of the facts from April 1, 2015 to March 31, 2016.", "We take the same pretreatment of the train, validation and test sets as (Goel et al., 2020), to make the problem into a TKGC rather than an extrapolation problem.", "Link prediction task that aims to infer incomplete time-wise fact with a missing entity ( ( s, r, ? , t ) or (? , r, o, t ) ) is adopted to evaluate the proposed model.", "During the inference, we follow the same procedure of Xu et al. to generate candidates.", "For a test sample ( s, r, o, t ) , we first generate candidate quadruples set C = { ( s, r, o, t ) : o E} { ( s, r, o, t ) : s E} by replacing s or o with all possible entities, and then rank all the quadruples by their scores (Equation 14) under the time-wise filtered settings (Xu et al., 2019; Goel et al., 2020).", "The performance is reported on the standard evaluation metrics: the proportion of correct triples ranked in top 1, 3 and 10 (Hits@1, Hits@3, and Hits@10), and Mean Reciprocal Rank (MRR).", "All the metrics (Hits@1, Hits@3, Hits@10 and MRR) are the higher the better.", "For all experiments, we report averaged results across 5 runs, and we omit the variance as it is generally low.", "We compare with both sota static and temporal KGE baselines.", "For static baselines, we use TransE (Bordes et al., 2013), DistMult (Yang et al., 2015), RotatE (Sun et al., 2019), and QuatE (Zhang et al., 2019a).", "For TKGE methods, we consider TTransE (Leblay and Chekol, 2018), HyTE (Dasgupta et al., 2018), TA-DistMult (Garca-Durn et al., 2018), DE-SimplE (Goel et al., 2020), ATiSE (Xu et al., 2019), and TeRo (Xu et al., 2020a).", "7 Note that TeRo (Xu et al., 2020a) is also based on the idea of rotations, and thus we consider TeRo as a 
direct baseline.", "Because our quaternion representation ($a + b\mathbf{i} + c\mathbf{j} + d\mathbf{k}$) doubles the embedding parameters of TeRo, which uses a complex representation ($a + b\mathbf{i}$), we further adopt two models for fair comparisons:", "(Footnote 7: See the complexity comparison in Appendix 3.5.)", "(i) TeRo-Large: TeRo using double embedding dimension;", "(ii) RotateQVS-Small: the proposed RotateQVS with half embedding dimension.", "By doing so, their parameter complexities are comparable with TeRo's.", "The experimental results over the four TKG datasets are shown in Table 3. Overall, TKGE methods are better than static KGE methods, which shows the effectiveness of modeling temporal information.", "For the proposed RotateQVS, we observe that our model consistently outperforms all the baseline models over the four datasets across all metrics.", "To demonstrate the superiority of the proposed quaternion method, we compare our RotateQVS with the direct baseline TeRo (Xu et al., 2020a).", "For fair comparisons of model sizes, we observe that our RotateQVS outperforms TeRo-Large and RotateQVS-Small outperforms TeRo.", "This shows that our method with quaternion embeddings makes great improvements, demonstrating our advantages.", "In particular, we see that our RotateQVS achieves larger improvements on the ICEWS14 and ICEWS05-15 datasets.", "We believe this is because these two datasets have many more relations (see Table 2), and it is also evident that our method behaves better on datasets with complex relation patterns.", "To further demonstrate the learned quaternion embeddings and the ability of our model, we perform case studies on multiple relation patterns, through visualization and quantitative analysis of intuitive examples from ICEWS14.", "Since the symmetric, asymmetric and inversion patterns have been discussed in previous work (Sun et al., 2019; Xu et al., 2020a), we present their case studies in Appendix J.", "As shown in Lemma 5, if a relation $r_1$ and a relation $r_2$ are evolving over time from $t_1$ (time stamp 1", "To analyse the temporal-evolution pattern, we focus on the relations between the same head and tail entities with different time stamps.", "For example, from ICEWS14, we observe a base fact (South Korea, Engage in negotiation, North Korea, 2014-02-12) and a temporal-evolution fact (South Korea, Sign formal agreement, North Korea, 2014-02-15), where Sign formal agreement is considered as the consequence of Engage in negotiation.", "Thus, in our model, the embeddings of Sign formal agreement at time stamp 2014-02-15 and of Engage in negotiation at 2014-02-12 should satisfy Equation 22.", "To illustrate this pattern, we measure the matrix cosine similarity between $r_2$ (base) and the temporally evolved embedding $\tau_2 \tau_1^{-1}\, r_1\, (\tau_2 \tau_1^{-1})^{-1}$.", "For each true fact, we sample a random negative relation and show their similarity difference.", "(Figure 2 caption: Density histogram with bin size 1% of similarity scores for temporal-evolution relations.", "All positive and negative examples are randomly sampled and compared with the base relation Engage in negotiation.)", "Figure 2 illustrates the density histogram of similarities for 250 random fact quadruples at different time stamps between South Korea and North Korea.", "We observe that the distributions of positive examples and negative examples are distinct, which explains that (Table 4: Examples of temporal-evolution patterns in the ICEWS14 dataset. Example 1 (head John Kerry, tail Pietro Parolin): base fact Express intent to meet or negotiate at 2014-01-13; true fact Consult at 2014-01-16, similarity 0.810; negative Detonate nuclear weapons, similarity 0.508. Example 2 (head Member of Legislative (Govt) (Iran), tail Iran): base fact Make statement at 2014-03-16; true fact Make statement at 2014-05-04, similarity 0.819; negative Detonate nuclear weapons, similarity 0.492. Example 3 (head Federal Bank, tail European Central Bank): base fact Make a visit at 2014-02-04; true fact Make statement at 2014-02-25, similarity 0.815; negative Receive inspectors, similarity 0.510.)", "our RotateQVS can model temporal-evolution patterns more effectively.", "Compared with TeRo (Xu et al., 2020a), which is the main baseline for our model, we show that TeRo cannot model this pattern theoretically (see Section 3.4).", "In addition, Figure 3 shows that our quaternion representation does well in reflecting Equation 19, the sufficient but not necessary condition from the theoretical analysis of the temporal-evolution pattern.", "More examples of the temporal-evolution pattern are shown in Table 4, where we use the relation in the base fact and the time information to obtain a generated embedding $\tau_2 \tau_1^{-1}\, r_1\, (\tau_2 \tau_1^{-1})^{-1}$, and also sample a random negative relation for each example.", "We compute the matrix cosine similarity between $\tau_2 \tau_1^{-1}\, r_1\, (\tau_2 \tau_1^{-1})^{-1}$ and $r_2$, and also the similarity between $\tau_2 \tau_1^{-1}\, r_1\, (\tau_2 \tau_1^{-1})^{-1}$ and the embedding of the other relation in the negative sample.", "Time stamps in the negatives are taken to be the same as in the true facts.", "The comparison between the two sets of results once again demonstrates the ability of our model to capture this pattern.", "For convergence analysis, we consider two fair comparisons, where the two compared methods have the same number of parameters: RotateQVS (blue solid line) vs. TeRo-Large (yellow solid line) and RotateQVS-Small (green dotted line) vs.", "TeRo (red dotted line) in Figure 4. We observe that RotateQVS and TeRo-Large converge at approximately the same rate, and so do RotateQVS-Small and TeRo.", "We can conclude that our proposed RotateQVS achieves better results at both the large and small model sizes without requiring additional training effort.", "Models for static knowledge graphs have been well studied (Zhang et al., 2019b; Xu et al., 2020b; Mao et al., 2020; Chen et al., 2021a) with semantic and structural information.", "Translation-based methods, e.g.
TransE (Bordes et al., 2013) and TransR (Lin et al., 2015), formalise the factual distance between a head entity $s$ and a tail entity $o$ via the translation carried out by the relation.", "Adopting tensor factorization with a bilinear transformation, semantic matching models, e.g. RESCAL (Nickel and Tresp, 2013) and DistMult (Yang et al., 2015), capture the semantic relevance of entities.", "Recently, more attention has been paid to studying various relation patterns.", "RotatE (Sun et al., 2019) treats each relation as a rotation so that symmetric/asymmetric, inversion and composition patterns can be inferred to predict missing links.", "Further, the quaternion number system (Hamilton, 1844) has been applied to model more complex composition patterns in 3D space, as in Rotate3D (Gao et al., 2020) and QuatE (Zhang et al., 2019a).", "Many of the aforementioned methods (Dasgupta et al., 2018; Leblay and Chekol, 2018; Trivedi et al., 2017; García-Durán et al., 2018; Goel et al., 2020; Sadeghian et al., 2021) have been extended from static KGs to TKGs.", "They integrate time information into previous static methods as independent features.", "Others study the dynamic evolution of TKGs.", "ATiSE (Xu et al., 2019) regards the temporal evolution of entity and relation embeddings as combinations of a trend component, a seasonal component and a random component.", "CyGNet (Zhu et al., 2021) proposes a time-aware copy-generation mechanism leveraging known facts in the past to predict unknown facts in the future.", "TeRo (Xu et al., 2020a) defines the temporal evolution of entity embeddings as a rotation in the complex vector space.", "Inspired by TeRo, our RotateQVS further represents temporal entities as rotations in quaternion vector space and obtains further advantages.", "Modeling various temporal relation patterns (Goel et al., 2020; Xu et al., 2020a), especially the temporal-evolution patterns, is crucial for TKGE and the downstream TKGC.", "Zhang et al. mention the time-evolution property, but do not conduct a systematic study of it.", "It remains an open research question with few existing studies.", "Our work (RotateQVS) takes inspiration from the idea of rotation and generalises it to the quaternion number system to model the complex temporal-evolution pattern, which TeRo can hardly do.", "In this paper, we introduce a novel TKGC method, RotateQVS, which represents temporal information of TKGs as rotations in quaternion vector space.", "Targeting temporal interpretability, we theoretically show that RotateQVS can model various relation patterns, and demonstrate this with extensive experiments.", "Compared to previous methods, RotateQVS makes significant improvements on link prediction tasks over four benchmark datasets.", "Furthermore, we show that our RotateQVS has great advantages in modeling various relation patterns with temporal evolution.", "This work is supported by the Key R&D Program of Guangdong Province (No. 2019B010136003) and the National Natural Science Foundation of China (No. 61732004, 61732022)." ]
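The quaternion conjugation that the RotateQVS sentences above rely on, evolving a relation quaternion r by a unit time-stamp quaternion tau via tau r tau^{-1}, can be sketched in a few lines of Python. This is an illustrative sketch only: the function names, the single-quaternion embedding, and the random toy values are assumptions, not the authors' released code.

```python
import numpy as np

def hamilton(p, q):
    """Hamilton product of two quaternions (a, b, c, d) = a + bi + cj + dk."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([
        a1*a2 - b1*b2 - c1*c2 - d1*d2,
        a1*b2 + b1*a2 + c1*d2 - d1*c2,
        a1*c2 - b1*d2 + c1*a2 + d1*b2,
        a1*d2 + b1*c2 - c1*b2 + d1*a2,
    ])

def conjugate(q):
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def inverse(q):
    return conjugate(q) / np.dot(q, q)

def rotate(r, tau):
    """Evolve relation quaternion r by time quaternion tau: tau * r * tau^{-1}."""
    return hamilton(hamilton(tau, r), inverse(tau))

rng = np.random.default_rng(0)
r1 = rng.normal(size=4)        # toy base-relation embedding (one quaternion)
tau = rng.normal(size=4)
tau /= np.linalg.norm(tau)     # unit time-stamp quaternion

r2 = rotate(r1, tau)           # temporally evolved relation
# conjugation by a unit quaternion is an isometry: the norm is preserved
assert np.isclose(np.linalg.norm(r2), np.linalg.norm(r1))
```

Because the quaternion norm is multiplicative, conjugating by a unit quaternion preserves norms, which is one reason comparing an evolved relation embedding against a base relation by cosine similarity, as in the case studies above, is meaningful.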
[ "abstain", "abstain", "abstain", "objective", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "method", "result", "objective", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "method", "objective", "objective", "abstain", "result", "other" ]
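A record in this file pairs a sentences sequence with a parallel labels sequence of equal length, as in the two arrays above. A minimal Python sketch of consuming such a record (the toy record below is fabricated for illustration, not taken from the file):

```python
from collections import Counter

# toy record mimicking the parallel (sentences, labels) structure above
record = {
    "sentences": [
        "We propose a new model.",
        "Results improve over the baseline.",
        "Prior work studied this setting.",
    ],
    "labels": ["objective", "result", "other"],
}

# the two sequences are aligned one-to-one
assert len(record["sentences"]) == len(record["labels"])

pairs = list(zip(record["sentences"], record["labels"]))
label_counts = Counter(record["labels"])
print(label_counts.most_common())
```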
[ "Leveraging domain knowledge is an effective strategy for enhancing the quality of inferred low-dimensional representations of documents by topic models.", "In this paper, we develop topic modeling with knowledge graph embedding (TMKGE), a Bayesian nonparametric model that employs knowledge graph (KG) embedding in the context of topic modeling to extract more coherent topics.", "Specifically, we build a hierarchical Dirichlet process (HDP) based model to flexibly borrow information from KGs to improve the interpretability of topics.", "An efficient online variational inference method based on a stick-breaking construction of HDP is developed for TMKGE, making TMKGE suitable for large document corpora and KGs.", "Experiments on three public datasets illustrate the superior performance of TMKGE in terms of topic coherence and document classification accuracy, compared to state-of-the-art topic modeling methods.", "Topic models, such as Probabilistic Latent Semantic Analysis (PLSA) (Hofmann, 2017) and Latent Dirichlet Allocation (LDA) (Blei et al., 2003), play significant roles in helping machines interpret text documents.", "Topic models consider documents as a bag of words.", "Given the word information, topic models formulate documents as mixtures of latent topics, where these topics are generated via multinomial distributions over words.", "Bayesian methods are utilized to extract topical structures from the document-word frequency representations of the text corpus.", "Without supervision, however, it is found that the topics generated from these models are often not interpretable (Chang et al., 2009; Mimno et al., 2011).", "In recent studies, incorporating knowledge of different forms as supervision has become a powerful strategy for discovering meaningful topics (Andrzejewski et al., 2009).", "Most conventional approaches take prior domain knowledge into account to improve topic coherence (Andrzejewski et al., 2009; Andrzejewski and Zhu, 2009; Hu et al., 2014; Jagarlamudi et al., 2012; Doshi-Velez et al., 2015).", "One commonly used form of domain knowledge is based on word correlations (Andrzejewski et al., 2009; Chen et al., 2013; Chen and Liu, 2014).", "For example, must-links and cannot-links among words are generated by domain experts to help topic modeling (Andrzejewski et al., 2009).", "Another useful form of knowledge for topic discovery is based on word semantics (Andrzejewski and Zhu, 2009; Chemudugunta et al., 2008; Hu et al., 2014; Jagarlamudi et al., 2012; Doshi-Velez et al., 2015).", "In particular, word embeddings (Pennington et al., 2014; Goldberg and Levy, 2014), in which bags of words are transformed into vector representations so that context is embedded into the word vectors, are used as semantic regularities to enhance topic models (Nguyen et al., 2015; Li et al., 2016; Das et al., 2015; Batmanghelich et al., 2016).", "Knowledge graph (KG) embedding (Bordes et al., 2013) learns a low-dimensional continuous vector space for entities and relations to preserve the inherent structure of KGs.", "Yao et al.
(2017) proposes KGE-LDA to incorporate embeddings of KGs into topic models to extract better topic representations for documents, and shows promising performance.", "However, KGE-LDA forces words and entities to have identical latent representations, which is a rather restrictive assumption that prevents the topic model from recovering the correct underlying latent structure of the data, especially in scenarios where only partial KGs are available.", "This paper develops topic modeling with knowledge graph embedding (TMKGE), a hierarchical Dirichlet process (HDP) based model to extract more coherent topics by taking advantage of the KG structure.", "Unlike KGE-LDA, the proposed TMKGE allows for more flexible sharing of information between words and entities, by using a multinomial distribution to model the words and a multivariate Gaussian mixture to model the entities.", "With this approach, we introduce two proportion vectors, one for words and one for entities.", "In contrast, KGE-LDA only uses one, shared by both words and entities.", "Similar to HDP, TMKGE includes a collection of Dirichlet processes (DPs) at both the corpus and document levels.", "The atoms of the corpus-level DP form the base measure for the document-level DPs of words and entities.", "Therefore, the atoms of the corpus-level DP can represent word topics, entity mixture components, or both.", "Figure 1 provides an overview of TMKGE, where two sources of inputs, bag of words and KG embedding, extracted from the corpus and KGs respectively, are passed into TMKGE.", "As a nonparametric model, TMKGE does not assume a fixed number of topics or entity mixture components as constraints.", "Instead, it learns the number of topics and entity mixture components automatically from the data.", "Furthermore, an efficient online variational inference algorithm is developed, based on Sethuraman's stick-breaking construction of HDP (Sethuraman, 1994).", "We in fact construct stick-breaking inference in a mini-batch fashion (Wang et al., 2011; Bleier, 2013), to derive a more efficient and scalable coordinate-ascent variational inference for TMKGE.", "Summary of contributions: TMKGE is a Bayesian nonparametric model to extract more coherent topics by taking advantage of knowledge graph structures.", "We introduce two proportion vectors for more flexible sharing of information between words and entities.", "We derive an efficient and scalable parameter estimation algorithm via online variational inference.", "Finally, we empirically demonstrate the effectiveness of TMKGE in topic discovery and document classification.", "Latent Dirichlet Allocation (LDA) (Blei et al., 2003) is a popular probabilistic model that learns latent topics from documents and words, by using Dirichlet priors to regularize the topic distributions.", "The topics generated from LDA models, however, are often not interpretable (Chang et al., 2009; Mimno et al., 2011), in part because LDA models are unsupervised, without using prior knowledge or external resources.", "In recent years, prior knowledge has been leveraged to guide the process of topic modeling (Andrzejewski and Zhu, 2009; Hu et al., 2014; Jagarlamudi et al., 2012; Doshi-Velez et al., 2015).", "For example, the Dirichlet forest LDA (DF-LDA) model (Andrzejewski et al., 2009) is proposed to incorporate must-links and cannot-links among words into topic modeling.", "One weakness of the DF-LDA model is that the link information is domain-dependent.", "Later, general knowledge based LDA is introduced to leverage must-links from multiple domains (Chen et al., 2013).", "More recently, MetaLDA (Zhao et al., 2017) proposes to improve topic modeling by incorporating diverse meta information as priors for both the document and word hyperparameters.", "Besides word correlations, word semantics is also utilized as a type of useful knowledge for topic modeling (Chemudugunta et al., 2008; Hu et al., 2014; Jagarlamudi et al., 2012).", "Word embeddings, as low-dimensional continuous vector representations of words (Mikolov et al., 2013; Bengio et al., 2003; Pennington et al., 2014), are regarded as efficient representations of word semantics.", "Latent Feature Topic Modeling (LFTM) is proposed to use pre-trained word embeddings in topic modeling (Nguyen et al., 2015).", "It incorporates the embedding of a word and its topics into the traditional multinomial distribution over words as the probability function of topic modeling.", "TopicVec extends LFTM by combining a word and its local contextual words together into the conventional multinomial distribution over words.", "It also learns embedding representations for topics (Li et al., 2016).", "Gaussian-LDA goes further to improve topic modeling (Das et al., 2015) by taking into consideration the continuous nature of word embeddings.", "Shi et al. (2017) constructs a more unified framework, STE (skip-gram topic embedding), to address the problem of polysemy.", "Li et al. (2019) proposes a unified framework, TMSA (Topic Modeling and Sparse Autoencoder), to improve topic discovery and word embedding simultaneously via a mutual learning mechanism.", "Hu et al. (2016) proposes topic-based embeddings for learning from large knowledge graphs (KGE).", "KGE learns a low-dimensional continuous vector space for entities and relations to preserve the inherent structure of knowledge graphs.", "A Bayesian method is introduced by considering the embeddings of entities and relations as topics.", "Later, Yao et al.
(2017) proposes knowledge graph embedding LDA (KGE-LDA) to encode entity embeddings learned from knowledge graphs into LDA and shows that knowledge graph embeddings boost topic discoveries.", "Inspired by this work, we explore utilizing entity embeddings to encode prior knowledge for topic modeling.", "This section presents the TMKGE model and an efficient online variational inference for learning its parameters.", "We first provide a review of the hierarchical Dirichlet process (HDP) (Teh et al., 2005).", "A Dirichlet process (DP) (MacEachern and Müller, 1998), $G \sim \mathrm{DP}(\alpha_0, G_0)$, with a base measure $G_0$ and a concentration parameter $\alpha_0 > 0$, is the distribution of a random probability measure $G$ over a measurable space $(\Theta, \mathcal{B})$, such that for any measurable disjoint partition $(A_1, \ldots, A_Q)$ of $\Theta$,", "$(G(A_1), \ldots, G(A_Q)) \sim \mathrm{Dir}(\alpha_0 G_0(A_1), \ldots, \alpha_0 G_0(A_Q))$", "The hierarchical Dirichlet process (HDP) (Teh et al., 2005), introduced for dealing with multiple ($D$) groups of data, is a distribution over a set of random probability measures over $(\Theta, \mathcal{B})$: one probability measure $G_d \sim \mathrm{DP}(\alpha_0, G_0)$ for each group $d \in \{1, 2, \ldots, D\}$, and a global probability measure $G_0 \sim \mathrm{DP}(\gamma, H)$ with a base measure $H$.", "Stick-breaking construction: Teh et al. (2005) shows that the draws from $G_0$ and $G_d$ can be expressed as weighted sums of point masses: $G_0 = \sum_{k=1}^{\infty} \beta_k \delta_{\phi_k}$, $G_d = \sum_{k=1}^{\infty} \pi_{dk} \delta_{\phi_k}$.", "A more convenient stick-breaking construction, especially for deriving closed-form variational inference (Wang et al., 2011), is Sethuraman (1994)'s construction, which proceeds as follows.", "First, the global-level DP draw is represented as $\beta'_k \sim \mathrm{Beta}(1, \gamma)$, $\beta_k = \beta'_k \prod_{\ell=1}^{k-1} (1 - \beta'_\ell)$; note that the distribution for $\beta = \{\beta_k\}_{k=1}^{\infty}$ is also commonly written as $\mathrm{GEM}(\gamma)$ (Pitman, 2002).", "Subsequently, the group-level draws are constructed as $\psi_{dt} \sim G_0$, $\pi'_{dt} \sim \mathrm{Beta}(1, \alpha_0)$, $\pi_{dt} = \pi'_{dt} \prod_{\ell=1}^{t-1} (1 - \pi'_{d\ell})$, $G_d = \sum_{t=1}^{\infty} \pi_{dt} \delta_{\psi_{dt}}$.", "Alternatively, the group-level atoms $\{\psi_{dt}\}_{t=1}^{\infty}$ can be represented as $\psi_{dt} = \phi_{c_{dt}}$, where the auxiliary indicator variables $c_{dt}$ are independently drawn from a multinomial $\mathrm{Mult}(\beta)$.", "Teh et al. (2008) also proposes a collapsed inference method as an alternative to stick-breaking inference.", "However, following Fox et al.
(2011), we stick to the uncollapsed HDP model, considering that our truncated Dirichlet process is more computationally efficient and simple to implement.", "Figure 2 is the graphical representation of TMKGE.", "Let $D$ denote the number of documents in the corpus, where each document $d \in \{1, 2, \ldots, D\}$ contains $N_d^{(w)}$ words and $N_d^{(e)}$ entities.", "Throughout this work, superscripts $(w)$ and $(e)$ indicate word and entity related parameters, respectively.", "In each document $d$, the $n$-th word is represented by $w_{dn}$, where each word belongs to a vocabulary of size $V$, i.e., $w_{dn} \in \{1, 2, \ldots, V\}$.", "Furthermore, the $P$-dimensional embedding of the $m$-th entity is $e_{dm}$, where the total number of unique entities in the corpus is $E$.", "We assume that entity embeddings are obtained from the complete knowledge graph, and hence they contain information independent of the corpus.", "In this paper, we use TransE (Bordes et al., 2013), a simple and effective tool for knowledge encoding, to calculate the embeddings of entities extracted from the documents.", "We should mention that we remove the normalization step of TransE, and thus the output vectors $e_{dm}$ do not have unit $\ell_2$ norm.", "TMKGE builds upon HDP for joint modeling of word topics and entity mixtures.", "At the corpus level, word topics and entity mixtures correspond to atoms of a Dirichlet process $G_0 \sim \mathrm{DP}(\gamma, H)$.", "At the document level, word topics and entity mixture components are atoms of independent DPs, with shared base measure $G_0$.", "Mathematically, for document $d$, we have $G_d^{(w)} \sim \mathrm{DP}(\alpha_0, G_0)$ and $G_d^{(e)} \sim \mathrm{DP}(\alpha_0, G_0)$, where $G_d^{(w)}$ and $G_d^{(e)}$ are word and entity related DPs.", "Sethuraman's construction in (1) yields $G_d^{(w)} = \sum_{t=1}^{\infty} \pi_{dt}^{(w)} \delta_{\psi_{dt}^{(w)}}$ and $G_d^{(e)} = \sum_{t=1}^{\infty} \pi_{dt}^{(e)} \delta_{\psi_{dt}^{(e)}}$. (2)", "These DPs are then used to assign words and entities to topics and mixture components, respectively.", "Using the mixing proportions of the DPs in (2), we have $p(z_{dn}^{(w)} = t) = \pi_{dt}^{(w)}$ and $p(z_{dm}^{(e)} = t) = \pi_{dt}^{(e)}$.", "In document $d$, let $z_{dn}^{(w)}$ denote the topic assigned to the $n$-th word, and $z_{dm}^{(e)}$ denote the mixture component assigned to the $m$-th entity.", "For simplicity, we use index $t$ to denote both word and entity related atoms, although they can correspond to different atoms of the global DP.", "The mixing proportions of the corpus-level DP are used to map the document atoms to the shared global atoms.", "More precisely, we introduce the word and entity atom-mapping auxiliary variables $c_d^{(w)} = \{c_{dt}^{(w)}\}_{t=1}^{\infty}$ and $c_d^{(e)} = \{c_{dt}^{(e)}\}_{t=1}^{\infty}$.", "TMKGE allows flexible sharing of information between knowledge graphs and documents.", "This is an important advantage, as in practice only partial relational information is available, and thus strictly forcing the topics and entity mixtures to share components may reduce the power of the model to correctly recover the latent structure of the data.", "Furthermore, the nonparametric nature of the model enables the automatic discovery of the number of atoms for both words and entities, at document and corpus levels.", "Each atom of the corpus DP ($G_0$) corresponds to a set of parameters for both words and entities.", "Atom $k$ contains the topic-word Dirichlet distribution $\phi_k = (\phi_{k1}, \ldots, \phi_{kV})^T$ and the entity Gaussian mixture parameters $\{\mu_k, \Lambda_k\}$.", "Given $\phi_k$ and the topic assignment variables, the generative process for the $n$-th word of document $d$ is $z_{dn}^{(w)} \sim \mathrm{Mult}(\pi_d^{(w)})$, $(w_{dn} \mid z_{dn}^{(w)} = t, c_{dt}^{(w)} = k, \phi_k) \sim \mathrm{Mult}(\phi_k)$.", "Entities are generated analogously from the Gaussian components, where $\mu_k$ and $\Lambda_k$ are the mean vector and precision matrix of the multivariate Gaussian distribution.", "Furthermore, we impose conjugate priors on both word and entity component parameters: $\phi_k \sim \mathrm{Dir}(\eta, \ldots, \eta)$, $\mu_k \sim \mathcal{N}(m_0, (\kappa_0 \Lambda_k)^{-1})$, $\Lambda_k \sim \mathrm{Wishart}(\nu_0, W_0)$.", "In this section, inspired by (Wang et al., 2011), we propose an online variational inference algorithm for efficient learning of TMKGE model
parameters.", "We use a fully factorized variational distribution based on the stick-breaking construction, and perform online mean-field variational inference.", "In addition to the topic parameters $\phi_k$ and entity mixture parameters $\{\mu_k, \Lambda_k\}$, other parameters of interest are the corpus-level stick proportions $\beta' = \{\beta'_k\}_{k=1}^{\infty}$, the document-level stick proportions for words $\pi'^{(w)}_d = \{\pi'^{(w)}_{dt}\}_{t=1}^{\infty}$ and entities $\pi'^{(e)}_d = \{\pi'^{(e)}_{dt}\}_{t=1}^{\infty}$, the topic assignments for words $z_d^{(w)} = \{z_{dn}^{(w)}\}_{n=1}^{N_d^{(w)}}$, the mixture assignments for entities $z_d^{(e)} = \{z_{dm}^{(e)}\}_{m=1}^{N_d^{(e)}}$, and the mapping variables $c_d^{(w)}$ and $c_d^{(e)}$.", "Denote by $\Omega^{(w)}$ and $\Omega^{(e)}$ the word and entity related parameters, respectively.", "Then the variational distribution factorizes as $q(\beta', \Omega^{(w)}, \Omega^{(e)}) = q(\beta')\, q(\Omega^{(w)})\, q(\Omega^{(e)})$.", "For the corpus-level stick proportions, we assume a Beta distribution: $q(\beta') = \prod_{k=1}^{K-1} \mathrm{Beta}(\beta'_k \mid u_k, v_k)$, where the number of global atoms is truncated at $K$, thereby $q(\beta'_K = 1) = 1$.", "For the word related parameters $\Omega^{(w)}$, we have $q(\Omega^{(w)}) = q(c^{(w)})\, q(z^{(w)})\, q(\pi'^{(w)})\, q(\phi)$, with $q(c^{(w)}) = \prod_{d=1}^{D} \prod_{t=1}^{T} \mathrm{Mult}(\varphi^{(w)}_{dt})$, $q(z^{(w)}) = \prod_{d=1}^{D} \prod_{n=1}^{N_d^{(w)}} \mathrm{Mult}(\zeta^{(w)}_{dn})$, $q(\pi'^{(w)}) = \prod_{d=1}^{D} \prod_{t=1}^{T-1} \mathrm{Beta}(\pi'^{(w)}_{dt} \mid a^{(w)}_{dt}, b^{(w)}_{dt})$, $q(\phi) = \prod_{k=1}^{K} \mathrm{Dir}(\phi_k \mid \lambda_k)$, where the number of document-level atoms is truncated at $T$.", "The variational distributions for the entity related parameters have a similar form to the above distributions, except for the Gaussian mixture parameters, which are expressed as follows: $q(\mu_k) = \mathcal{N}(m_k, (\kappa_k \Lambda_k)^{-1})$, $q(\Lambda_k) = \mathrm{Wishart}(\nu_k, W_k)$.", "In standard variational inference theory, the evidence lower bound (ELBO), which is a lower bound to the marginal log likelihood of the observed data, is maximized to find the best variational approximation to the true intractable posterior.", "Given the modeling framework of TMKGE, the ELBO can be written as $\mathcal{L}(q) = \sum_d \{ \mathbb{E}[\log( p(w_d \mid c_d^{(w)}, z_d^{(w)}, \phi)\, p(c_d^{(w)} \mid \beta')\, p(z_d^{(w)} \mid \pi'^{(w)}_d)\, p(e_d \mid c_d^{(e)}, z_d^{(e)}, \mu, \Lambda)\, p(c_d^{(e)} \mid \beta')\, p(z_d^{(e)} \mid \pi'^{(e)}_d)\, p(\pi'^{(w)}_d \mid \alpha_0)\, p(\pi'^{(e)}_d \mid \alpha_0) )] + H(q(c_d^{(w)})) + H(q(z_d^{(w)})) + H(q(\pi'^{(w)}_d)) + H(q(c_d^{(e)})) + H(q(z_d^{(e)})) + H(q(\pi'^{(e)}_d)) \} + \mathbb{E}[\log( p(\beta')\, p(\phi)\, p(\mu, \Lambda) )] + H(q(\beta')) + H(q(\phi)) + H(q(\mu, \Lambda))$, where $H(\cdot)$ is the entropy term for the variational distribution.", "By taking derivatives of this lower bound with respect to each variational parameter, we derive the coordinate ascent update steps.", "We develop an online variational inference for TMKGE, to process large datasets (Wang et al., 2011; Hoffman et al., 2010).", "Given the existing corpus-level parameters, first a document $d$ is sampled and then its optimal document-level variational parameters are computed.", "For the word related variational parameters, these updates include $a^{(w)}_{dt} = 1 + \sum_n \zeta^{(w)}_{dnt}$, $b^{(w)}_{dt} = \alpha_0 + \sum_n \sum_{s=t+1}^{T} \zeta^{(w)}_{dns}$, $\varphi^{(w)}_{dtk} \propto \exp( \sum_n \zeta^{(w)}_{dnt}\, \mathbb{E}_q[\log p(w_{dn} \mid \phi_k)] + \mathbb{E}_q[\log \beta_k] )$, $\zeta^{(w)}_{dnt} \propto \exp( \sum_k \varphi^{(w)}_{dtk}\, \mathbb{E}_q[\log p(w_{dn} \mid \phi_k)] + \mathbb{E}_q[\log \pi^{(w)}_{dt}] )$, (3) where the expectations are with respect to the variational distributions and have closed forms.", "For the entity related variational parameters, similar updates can be derived, with the term $\mathbb{E}_q[\log p(e_{dm} \mid \mu_k, \Lambda_k)]$ replacing $\mathbb{E}_q[\log p(w_{dn} \mid \phi_k)]$.", "Following Wang et al. (2011), for the corpus-level variational parameters, we use the following gradients: $\partial \lambda_{kv} = -\lambda_{kv} + \eta + D \sum_t \varphi^{(w)}_{dtk} ( \sum_n \zeta^{(w)}_{dnt}\, \mathbb{I}[w_{dn} = v] )$, $\partial m_k = -m_k + ( D \sum_{m,t} \varphi^{(e)}_{dtk} \zeta^{(e)}_{dmt}\, e_{dm} + \kappa_0 m_0 ) / ( D r_k + \kappa_0 )$, $\partial \kappa_k = -\kappa_k + \kappa_0 + D r_k$, $\partial \nu_k = -\nu_k + \nu_0 + D r_k$, $\partial W_k = -W_k + ( W_0^{-1} + D \sum_{m,t} \varphi^{(e)}_{dtk} \zeta^{(e)}_{dmt}\, e_{dm} e_{dm}^T )^{-1}$, $\partial u_k = -u_k + 1 + D \sum_t ( \varphi^{(w)}_{dtk} + \varphi^{(e)}_{dtk} )$, $\partial v_k = -v_k + \gamma + D \sum_t \sum_{\ell=k+1}^{K} ( \varphi^{(w)}_{dt\ell} + \varphi^{(e)}_{dt\ell} )$, (4) where $r_k$ is defined as $\sum_{m,t} \varphi^{(e)}_{dtk} \zeta^{(e)}_{dmt}$.", "The corpus-level parameters are then updated using these gradients and a learning rate parameter $\epsilon_{t_0}$ (among them, the first, the fifth and the sixth are natural gradients, while the other four are approximations from the posterior of the Gaussian-Wishart scale matrix $W$; it appears difficult to obtain natural gradients for those four).", "For instance, for the topic-word distribution parameters we have $\lambda \leftarrow \lambda + \epsilon_{t_0}\, \partial \lambda$. (5)", "(Table 1: Topic coherence (PMI scores) of all models on three datasets, for 5 / 10 / 15 / 20 / 25 / 30 top words. 20 Newsgroups: TMKGE (K=300, T=20) 20.8 / 91.1 / 210.0 / 380.0 / 602.0 / 876.0; HDP (K=300, T=20) 20.0 / 91.6 / 212.6 / 384.1 / 598.4 / 868.7; LDA (K=100) 13.5 / 64.6 / 163.4 / 285.0 / 455.2 / 671.1; KGE-LDA (K=30) 18.9 / 69.8 / 187.5 / 320.6 / 482.7 / 616.5. NIPS: TMKGE 16.6 / 97.1 / 160.3 / 299.6 / 474.5 / 685.5; HDP 16.7 / 66.8 / 157.2 / 280.2 / 444.0 / 643.1; LDA 13.9 / 67.6 / 161.9 / 297.0 / 471.2 / 681.1; KGE-LDA 14.3 / 97.2 / 163.4 / 285.3 / 453.3 / 645.4. Ohsumed: TMKGE 21.6 / 123.3 / 237.3 / 407.7 / 624.2 / 895.5; HDP 15.6 / 70.7 / 168.2 / 338.9 / 582.9 / 864.9; LDA 11.9 / 65.6 / 131.9 / 257.0 / 481.2 / 691.1; KGE-LDA 15.6 / 116.5 / 185.4 / 354.2 / 585.4 / 795.6.)", "The rest of the corpus-level variational parameters in (4) can be similarly updated.", "To ensure that the parameters converge to a stationary point, the
learning rate satisfies (Hoffman et al., 2010; Sato, 2001) Σ_{t_0=1..∞} ε_{t_0} = ∞ and Σ_{t_0=1..∞} ε_{t_0}² < ∞.", "Following Wang et al. (2011), we use ε_{t_0} = (τ_0 + t_0)^{−κ}, where κ ∈ (0.5, 1] and τ_0 > 0.", "To improve the stability of online variational inference, we use a mini-batch of documents to compute the natural gradients.", "That is, the contribution of the single document d in (4) is replaced by the sum of the contributions of the documents in the mini-batch S, and the factor D is replaced by D/|S|.", "The overall scheme of online variational inference for TMKGE is shown in Algorithm 1.", "Algorithm 1: Online variational inference for the proposed TMKGE framework.", "while the stopping criterion is not met do: Sample a random document d from the corpus.", "Update a_d^(w), b_d^(w), φ_d^(w) and ς_d^(w) using (3).", "Update a_d^(e), b_d^(e), φ_d^(e) and ς_d^(e) similarly to (3).", "Compute the natural gradients using (4).", "Set ε_{t_0} = (τ_0 + t_0)^{−κ} and t_0 ← t_0 + 1.", "Update all corpus-level parameters as in (5).", "We evaluate TMKGE on two experimental tasks and compare its performance to those of LDA, HDP and KGE-LDA.", "For LDA and HDP, we use the online variational inference implementations.", "More precisely, we evaluate whether our framework finds coherent and meaningful topics, and whether it achieves good performance in document classification.", "We run our experiments on three popular datasets: 20 Newsgroups, NIPS and the Ohsumed corpus.", "The 20 Newsgroups dataset contains 18,846 documents evenly categorized into 20 different categories.", "The NIPS dataset contains 1,740 papers from the NIPS conference.", "The Ohsumed corpus is from the MEDLINE database.", "We consider the 13,929 unique cardiovascular disease abstracts in the first 20,000 abstracts of the year 1996.", "Each document in the set has one or more associated categories from the 23 disease categories.", "The documents belonging to multiple categories are eliminated so that
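The learning-rate schedule and the mini-batch scaling described above can be sketched as follows. All function and variable names are illustrative; the per-parameter assembly of the gradients in (4) is abstracted into a list of per-document contributions.

```python
def learning_rate(t, tau0=1.0, kappa=0.7):
    """Step size eps_t = (tau0 + t)^(-kappa); kappa in (0.5, 1] ensures
    the Robbins-Monro conditions sum(eps) = inf and sum(eps^2) < inf."""
    assert 0.5 < kappa <= 1.0 and tau0 > 0
    return (tau0 + t) ** (-kappa)

def noisy_gradient(batch_contribs, D):
    """Scale the summed mini-batch contributions by D / |S|, as in the
    text (the factor D is replaced by D / |S|)."""
    scale = D / len(batch_contribs)
    return scale * sum(batch_contribs)

def sgd_step(param, grad, t, tau0=1.0, kappa=0.7):
    """One stochastic update: param <- param + eps_t * grad (cf. Eq. 5)."""
    return param + learning_rate(t, tau0, kappa) * grad
```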
7,400 documents belonging to only one category remain.", "The datasets are tokenized with Stanford CoreNLP (Manning et al., 2014).", "After standard pre-processing (such as removing stop words), there are 20,881 distinct words in the 20 Newsgroups dataset, 14,482 distinct words in the NIPS dataset and 8,446 distinct words in the Ohsumed dataset.", "The knowledge graph we employ is WordNet (Miller, 1995).", "WordNet is a large lexical knowledge graph.", "Entities in WordNet are synsets, i.e. sets of synonyms that each express a distinct concept.", "Relations in WordNet mainly involve conceptual-semantic and lexical links.", "We use a subset of WordNet (WN18) introduced in Bordes et al. (2011) and also employed in Yao et al. (2017).", "WN18 contains 151,442 triplets with 40,943 entities and 18 relations.", "We link tokenized words to entities in WN18 via NLTK (Bird and Loper, 2004).", "In the experiments, for each method, we report the results based on the hyperparameter settings that yield the best performance.", "For TMKGE and HDP, we report the results for the K = 300, T = 20 and K = 100, T = 10 settings.", "For LDA and KGE-LDA, we use K = 100 and K = 30, respectively.", "Throughout this work we fix the dimension of the entity embedding at P = 5.", "For online variational inference, we run the algorithms for 1000 iterations, with a mini-batch size of 100.", "We assess the performance of the proposed TMKGE model based on topic coherence.", "Topic coherence has been shown to be more consistent with human judgment than other typical topic model metrics such as perplexity (Chang et al., 2009; Newman et al., 2010).", "We perform both quantitative and qualitative analysis of the topics discovered by TMKGE, and compare its performance to those of LDA, HDP and KGE-LDA.", "We evaluate the coherence of discovered topics by the point-wise mutual information (PMI) Topic Coherence metric.", "The PMI Topic Coherence is implemented following Newman et al.
(2010): PMI(k) = Σ_{j=2..N} Σ_{i=1..j−1} log [ p(w_i, w_j) / ( p(w_i) p(w_j) ) ], where k refers to a topic, N is the number of top words of k, p(w_i) is the probability that w_i appears in a document, and p(w_i, w_j) is the probability that w_i and w_j co-occur in the same document.", "A higher PMI score implies a more coherent topic.", "Following KGE-LDA, 4,776,093 Wikipedia articles are employed for obtaining topic coherence scores.", "Different from Yao et al. (2017), which used a fixed value of N (the number of top words, e.g. N = 5 or N = 10), we vary N in a range from 5 to 30.", "Lau and Baldwin (2016) suggest that calculating topic coherence over several different cardinalities and averaging yields a substantially more stable evaluation.", "Table 1 shows the average topic coherence for different methods and datasets.", "We observe that, on all three datasets, TMKGE achieves the highest topic coherence for almost all top-word sizes.", "In the few cases in which TMKGE does not rank highest, its score differs only slightly from the top-performing result.", "This shows that knowledge graph embedding improves the coherence of discovered topics.", "Further, for the top 10 words, the topic coherence scores on all three datasets are higher than those obtained by KGE-LDA.", "This suggests that topic modeling based on HDP for both entity embeddings and words has clear advantages over LDA-based modeling.", "Table 2 shows example topics with their PMI scores learned from the three corpora by KGE-LDA and our TMKGE model.", "For comparison, we report topics similar to those listed in the KGE-LDA paper.", "It can be seen that TMKGE finds closely related words within a topic.", "For example, in the second column of 20 Newsgroups, the topic words from both TMKGE and KGE-LDA are related to computers.", "However, it can be noted that the words from TMKGE focus more on the core vocabulary of computer science.", "In contrast, words from the same topic in
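The PMI coherence just defined can be sketched directly from document co-occurrence counts. This is a minimal illustration under the assumption that `documents` is a list of token sets from the reference corpus; it is not the paper's implementation.

```python
import math
from itertools import combinations

def pmi_coherence(topic_words, documents, eps=1e-12):
    """PMI topic coherence over the top words of one topic.

    p(w) and p(w_i, w_j) are document-level (co-)occurrence frequencies
    estimated from `documents`; pairs that never co-occur are skipped
    (their log term is undefined)."""
    D = len(documents)

    def p(*words):
        return sum(all(w in doc for w in words) for doc in documents) / D

    score = 0.0
    for w_i, w_j in combinations(topic_words, 2):
        joint = p(w_i, w_j)
        if joint > 0:
            score += math.log(joint / (p(w_i) * p(w_j) + eps))
    return score
```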
KGE-LDA are closer to brand names, such as windows, mac or apple.", "In addition, topics found by TMKGE are more diverse than those found by KGE-LDA.", "For 20 Newsgroups, the three topics listed here refer to theology, computer science and the Middle East, while the three topics from KGE-LDA refer to the internet, computers and cars.", "Both TMKGE and KGE-LDA discover probability-related and machine learning topics, with different top words, from the NIPS dataset.", "Roughly speaking, KGE-LDA discovers gene-related, cancer-related and treatment-related topics from the Ohsumed corpus.", "TMKGE discovers more diverse and more specific topics.", "For example, TMKGE discovers topics about Vietnamese veterans, cancer and sexual diseases.", "From the perspective of topic coherence, we also see that TMKGE obtains higher PMI scores on most of these topics.", "This trend is consistent with the average PMI scores reported in the last section.", "Overall, TMKGE performs better than the other topic models, including LDA, HDP and KGE-LDA, in terms of average PMI and in qualitative case studies.", "We evaluate our proposed method through document classification, following the approach of Li and McCallum (2006).", "We conduct five-way classification on the comp subject of the 20 Newsgroups dataset and on the five most frequent labels of the Ohsumed dataset (there are no labels for the NIPS dataset), with each class of documents divided into 75% training and 25% testing.", "For each class, the LDA, HDP and TMKGE models are trained on the training documents, and the predictive likelihood for the test documents is then calculated using the E-step of the variational inference procedure of LDA.", "A document is classified correctly if its corresponding model produces the highest likelihood.", "Table 3 presents the average classification accuracy for TMKGE, HDP and LDA over five repeated simulations.", "The table
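The classification-by-likelihood protocol described above can be sketched with a stub standing in for a trained per-class model. The stub's scoring is an illustrative assumption; a real model would compute the variational predictive likelihood via the E-step.

```python
import math

class StubModel:
    """Stand-in for a trained per-class topic model (LDA/HDP/TMKGE)."""
    def __init__(self, word_probs):
        self.word_probs = word_probs

    def log_likelihood(self, doc):
        # Unseen words get a small floor probability.
        return sum(math.log(self.word_probs.get(w, 1e-6)) for w in doc)

def classify(doc, class_models):
    """Assign the class whose model gives the document the highest
    predictive log-likelihood."""
    return max(class_models, key=lambda c: class_models[c].log_likelihood(doc))
```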
includes the classification accuracy for KGE-LDA, where the learned topic proportions are used as features for an SVM classifier.", "For the majority of document classes, TMKGE has the best classification accuracy, except for the class mac.", "As shown, the SVM classifier based on KGE-LDA performs significantly worse.", "For a more complete comparison, we run experiments on all subjects of 20 Newsgroups and also report the experimental results published in Shi et al. (2017) in Table 4.", "TMKGE achieves the best performance among all models.", "Firstly, it appears that adding the unnormalized knowledge graph embeddings into TMKGE, as a proportion vector parallel to the word vector, boosts performance.", "Secondly, the choice of HDP over LDA plays an essential role.", "This is indicated by the poor performance of KGE-LDA (which is even worse than BOW).", "More impressively, TMKGE achieves much better performance than STE-Diff, TWE and TMSA, all of which integrate word embeddings and topic modeling.", "In particular, TMKGE outperforms the state-of-the-art model, TMSA, by a large margin.", "This shows that the knowledge graph structure included in the entity embeddings conveys more information than pure word embeddings.", "Meanwhile, it also shows that the two proportion vectors generated with online HDP enable flexible sharing of information between words and entities.", "Accordingly, more coherent topics are extracted and the classification results are boosted as well.", "This paper presents TMKGE, a Bayesian nonparametric model based on the hierarchical Dirichlet process for incorporating entity embeddings from external knowledge graphs into topic modeling.", "The proposed method allows for flexible sharing of information between documents and the knowledge graph.", "Specifically, TMKGE avoids forcing the words and entities to identical latent factors, thus making it a suitable framework for scenarios where only partial relational
information is available.", "Furthermore, as a Bayesian nonparametric model, TMKGE learns the number of word topics and entity mixture components automatically from the data.", "We have derived an efficient and scalable online variational inference algorithm for TMKGE.", "Comprehensive experiments on three different datasets suggest that TMKGE significantly outperforms state-of-the-art methods in terms of both topic coherence and document classification accuracy." ]
[ "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "method", "other", "abstain", "abstain", "other", "method", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", 
"abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain" ]
[ "We present a framework for learning hierarchical policies from demonstrations, using sparse natural language annotations to guide the discovery of reusable skills for autonomous decision-making.", "We formulate a generative model of action sequences in which goals generate sequences of high-level subtask descriptions, and these descriptions generate sequences of low-level actions.", "We describe how to train this model using primarily unannotated demonstrations by parsing demonstrations into sequences of named high-level subtasks, using only a small number of seed annotations to ground language in action.", "In trained models, natural language commands index a combinatorial library of skills; agents can use these skills to plan by generating high-level instruction sequences tailored to novel goals.", "We evaluate this approach in the ALFRED household simulation environment, providing natural language annotations for only 10% of demonstrations.", "It achieves task completion rates comparable to state-of-the-art models (outperforming several recent methods with access to ground-truth plans during training and evaluation) while providing structured and human-readable high-level plans.", "1 Introduction. Building autonomous agents that integrate high-level reasoning with low-level perception and control is a long-standing challenge in artificial intelligence (Fikes et al., 1972; Newell, 1973; Sacerdoti, 1973; Brockett, 1993).", "Fig. 1 shows an example: to accomplish a task such as cooking an egg, an agent must first find the egg, then grasp it, then locate a stove or microwave, at each step reasoning about both these subtasks and complex, unstructured sensor data.", "Hierarchical planning models (e.g.
Sutton et al., 1999), which first reason about abstract states and actions, then ground these in concrete control decisions, play a key role in most existing agent architectures.", "1 Code and visualizations: https://sites.google.com/view/skill-induction-latent-lang/.", "But training effective hierarchical models for general environments and goals remains difficult.", "Standard techniques either require detailed formal task specifications, limiting their applicability in complex and hard-to-formalize environments, or are restricted to extremely simple high-level actions, limiting their expressive power (Bacon et al., 2017; Sutton et al., 1999; Dietterich, 1999; Kaelbling and Lozano-Pérez, 2011).", "Several recent papers have proposed to overcome these limitations using richer forms of supervision, especially language, as a scaffold for hierarchical policy learning.", "In latent language policies (LLPs; Andreas et al., 2018), controllers first map from high-level goals to sequences of natural language instructions, then use instruction following models to translate those instructions into actions.", "But applications of language-based supervision for long-horizon policy learning have remained quite limited in scope.", "Current LLP training approaches treat language as a latent variable only during prediction, and require fully supervised (and often impractically large) datasets that align goal specifications with instructions and instructions with low-level actions.", "As a result, all existing work on language-based policy learning has focused on very short time horizons (Andreas et al., 2018), restricted language (Hu et al., 2019; Jacob et al., 2021) or synthetic training data (Shu et al., 2018; Jiang et al., 2019).", "In this paper, we show that it is possible to train language-based hierarchical policies that outperform state-of-the-art baselines using only minimal natural language supervision.", "We introduce a procedure for weakly and partially supervised
training of LLPs using ungrounded text corpora, unlabeled demonstrations, and a small set of annotations linking the two.", "To do so, we model training demonstrations as generated by latent high-level plans: we describe a deep, structured latent variable model in which goals generate subtask descriptions and subtask descriptions generate actions.", "We show how to learn in this model by performing inference in the infinite, combinatorial space of latent plans while using a comparatively small set of annotated demonstrations to seed the learning process.", "Using an extremely reduced version of the ALFRED household robotics dataset (Shridhar et al., 2020), with 10% of labeled training instructions, no alignments during training, and no instructions at all during evaluation, our approach performs comparably to a state-of-the-art model that makes much stronger dataset-specific assumptions (Blukis et al., 2021), while outperforming several models (Zhang and Chai, 2021; Suglia et al., 2021; Kim et al., 2021) that use more information during both training and evaluation.", "Our method correctly segments and labels subtasks in unlabeled demonstrations, including subtasks that involve novel compositions of actions and objects.", "Additional experiments show that pretraining on large (ungrounded) text corpora (Raffel et al., 2020) contributes to this success, demonstrating one mechanism by which background knowledge encoded in language can benefit tasks that do not involve language as an input or an output.", "Indeed, our results show that relatively little information about language grounding is needed for effective learning of language-based policies: a rich model of natural language text, a large number of demonstrations, and a small number of annotations suffice for learning compositional libraries of skills and effective policies for deploying them.", "We consider learning problems in which agents must perform multi-step tasks (like cooking an egg ; Fig.
1) in interactive environments.", "We formalize these problems as undiscounted, episodic, partially observed Markov decision processes (POMDPs) defined by a tuple (S, A, T, Ω, O), where S is a set of states, A is a set of actions, T : S × A → S is an (unknown) state transition function, Ω is a set of observations, and O : S → Ω is an (unknown) observation function.", "We assume that observations include a distinguished goal specification g that remains constant throughout an episode; given a dataset D consisting of goals g and demonstrations d (i.e. D = {(d_1, g_1), (d_2, g_2), ...}; d = [(o_1, a_1), (o_2, a_2), ...]; o ∈ Ω, a ∈ A), we aim to learn a goal-conditional policy π(a_t | a_{:t−1}, o_{:t}, g) = π(a_t | a_1, ..., a_{t−1}, o_1, ..., o_t, g) that generalizes demonstrated behaviors to novel goals and states.", "For tasks like the ones depicted in Fig. 1, this learning problem requires agents to accomplish multiple subgoals (like finding an egg or operating an appliance) in a feasible sequence.", "As in past work, we address this challenge by focusing on hierarchical policy representations that plan over temporal abstractions of low-level action sequences.", "We consider a generic class of hierarchical policies that first predict a sequence of subtask specifications ℓ_i from a distribution C(ℓ_i | ℓ_{:i−1}, g) (the controller), then from each ℓ generate a sequence of actions a 1 . . .
a n from a distribution E(a_i | a_{:i−1}, o_{:i}, ℓ) (the executor).", "At each timestep, E may either generate an action from A or a special termination signal STOP; after STOP is selected, control is returned to C and a new ℓ is generated.", "This process is visualized in Fig. 2(a).", "2 For notational convenience, we assume without loss of generality that T and O are deterministic.", "3 In past work, E often conditions on the current observation as well as the goal and history of past subtask specifications; we found that this extra information was not needed for the tasks studied here.", "Figure 2: (a) When a hierarchical policy is deployed, C generates a sequence of subtask specifications, and E translates each of these to a low-level action sequence ending in STOP.", "At training time, this hierarchical structure is not available, and must be inferred to train our model.", "To do so, we assign each action a_i an auxiliary alignment variable α_i identifying the subtask that produced it.", "Alignments divide an action sequence into a sequence of segments s containing actions aligned to the same subtask.", "Automatically segmenting training demonstrations makes it possible to learn modular, reusable policies for individual subtasks without direct supervision.", "(b) Overview of the proposed learning algorithm (SL)³, which alternates between segmenting (by aligning) actions to fixed subtask specifications, labeling segments given fixed alignments, and updating model parameters.", "Trajectories generated by hierarchical policies themselves have hierarchical structure: each subtask specification ℓ generates a segment of a trajectory (delimited by a STOP action) that accomplishes a specific subgoal.", "Training a hierarchical policy requires first defining a space of subtask specifications ℓ, then parameterizing controller and executor policies that can generate these specifications appropriately.", "Most past research has either pre-defined an inventory
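The controller/executor interaction with STOP described above can be sketched as a generic rollout loop. This is a schematic with stub policy callables (names and the termination convention are illustrative), not the paper's implementation.

```python
STOP = "STOP"

def rollout(controller, executor, goal, max_subtasks=10, max_actions=50):
    """Hierarchical policy execution: the controller proposes subtask
    descriptions given those already completed; the executor emits
    low-level actions for each subtask until it selects STOP, at which
    point control returns to the controller."""
    subtasks, actions = [], []
    for _ in range(max_subtasks):
        tau = controller(subtasks, goal)       # C(l_i | l_<i, g)
        if tau is None:                        # controller terminates the plan
            break
        subtasks.append(tau)
        for _ in range(max_actions):
            a = executor(actions, tau)         # E(a | history, l)
            if a == STOP:
                break
            actions.append(a)
    return subtasks, actions
```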
of target skills and independently supervised C and E (Sutton et al., 1999; Kulkarni et al., 2016; Dayan and Hinton, 1992); or performed unsupervised discovery of a finite skill inventory using clustering techniques (Dietterich, 1999; Fox et al., 2017).", "Both methods have limitations, and recent work has explored methods for using richer supervision to guide discovery of skills that are more robust than human-specified ones and more generalizable than automatically discovered ones.", "One frequently proposed source of supervision is language: in latent language policies, C is trained to generate goal-relevant instructions in natural language, E is trained to follow instructions, and the space of abstract actions available for planning is in principle as structured and expressive as language itself.", "But current approaches to LLP training remain impractical, requiring large datasets of independent, fine-grained supervision for C and E.", "Below, we describe how to overcome this limitation, and instead learn from large collections of unlabeled demonstrations augmented with only a small amount of natural language supervision.", "Overview: We train hierarchical policies on unannotated action sequences by inferring latent natural language descriptions of the subtasks they accomplish (Fig. 2(b)).", "We present a learning algorithm that jointly partitions these action sequences into smaller segments exhibiting reusable, task-general skills, labels each segment with a description, trains C to generate subtask descriptions from goals, and E to generate actions from subtask descriptions.", "Formally, we assume access to two kinds of training data: a large collection of unannotated demonstrations D = {(d_1, g_1), (d_2, g_2), ...} and a smaller collection of annotated demonstrations D_ann = {(d_1, g_1, ℓ_1), (d_2, g_2, ℓ_2), ...} where each ℓ consists of a sequence of natural language instructions [ℓ_1, ℓ_2, . . .
] corresponding to the subtask sequence that should be generated by C.", "We assume that even annotated trajectories leave much of the structure depicted in Fig. 2(a) unspecified, containing no explicit segmentations or STOP markers.", "(The number of instructions |ℓ| will in general be smaller than the number of actions |d|.)", "Training E requires inferring the correspondence between actions and annotations on D_ann while inferring annotations themselves on D.", "Training objective: To begin, it will be convenient to have an explicit expression for the probability of a demonstration given a policy (C, E).", "To do so, we first observe that the hierarchical generation procedure depicted in Fig. 2(a) produces a latent alignment between each action and the subtask that generated it.", "We denote these alignments α, writing α_i = j to indicate that a_i was generated from ℓ_j.", "Because C executes subtasks in sequence, alignments are monotonic, satisfying α_i = α_{i−1} or α_i = α_{i−1} + 1.", "Let seg(α) denote the segmentation associated with α, the sequence of sequences of action indices [[i : α_i = 1], [i : α_i = 2], . . .
] aligned to the same instruction (see Fig. 2(a)).", "Then, for a fixed policy and POMDP, we may write the joint probability of a demonstration, goal, annotation, and alignment as: p(d, g, ℓ, α) ∝ Π_{s ∈ seg(α)} [ C(ℓ_s | ℓ_{<s}, g) ( Π_{i ∈ 1..|s|} E(a_{s_i} | a_{s_{:i−1}}, o_{s_{:i}}, ℓ_s) ) E(STOP | a_s, o_s) ] (1).", "Here ℓ_{<s} (in a slight abuse of notation) denotes all segments preceding s, and s_i is the index of the i-th action in s.", "The constant of proportionality in Eq. (1) depends only on terms involving T(s′ | s, a), O(o | s) and p(g), all independent of C or E; Eq. (1) thus describes the component of the data likelihood under the agent's control (Ziebart et al., 2013).", "With this definition, and given D and D_ann as defined above, we may train a latent language policy using partial natural language annotations via ordinary maximum likelihood estimation, imputing the missing segmentations and labels in the training set jointly with the parameters of C and E (which we denote θ), in the combined annotated and unannotated likelihoods: arg max_{θ, ℓ, α} L(θ, ℓ, α) + L_ann(θ, α) (2), where L(θ, ℓ, α) = Σ_{(d, g) ∈ D} log p(d, g, ℓ, α) (3) and L_ann(θ, α) = Σ_{(d, g, ℓ) ∈ D_ann} log p(d, g, ℓ, α) (4), and where we have suppressed the dependence of p(d, g, ℓ, α) on θ for clarity.", "This objective involves continuous parameters θ, discrete alignments α, and discrete labelings ℓ.", "We optimize it via block coordinate ascent on each of these components in turn: alternating between resegmenting demonstrations, relabeling those without ground-truth labels, and updating parameters θ.", "The full learning algorithm, which we refer to as (SL)³ (semi-supervised skill learning with latent language), is shown in Algorithm 1, with each step of the optimization procedure described in more detail below.", "The segmentation step associates each low-level action with a high-level subtask by finding the
highest scoring alignment sequence for each demonstration in D and D_ann.", "While the number of possible alignments for a single demonstration is exponential in demonstration length, the assumption that E depends only on the current subtask implies the following recurrence relation: max_{α_{1:n}} p(d_{1:n}, g, ℓ_{1:m}, α_{1:n}) = max_i ( max_{α_{1:i}} p(d_{1:i}, g, ℓ_{1:m−1}, α_{1:i}) · p(d_{i+1:n}, g, ℓ_m, α_{i+1:n} = m) ) (5).", "This means that the highest-scoring segmentation can be computed by an algorithm that recursively identifies the highest-scoring alignment to each prefix of the instruction sequence at each action (Algorithm 2), a process requiring O(|d||ℓ|) space and O(|d|²|ℓ|) time.", "The structure of this dynamic program is identical to the forward algorithm for hidden semi-Markov models (HSMMs), which are widely used in NLP for tasks like language generation and word alignment (Wiseman et al., 2018).", "Indeed, Algorithm 2 can be derived immediately from Eq. (1) by interpreting p(d, g, ℓ, α) as the output distribution for an HSMM in which emissions are actions, hidden states are alignments, the emission distribution is E, and the transition distribution is the deterministic distribution with p(α + 1 | α) = 1.", "This segmentation procedure does not produce meaningful subtask boundaries until an initial executor policy has been trained.", "Thus, during the first iteration of training, we estimate a segmentation by fitting a 3-state hidden Markov model to training action sequences using the Baum-Welch algorithm (Baum et al., 1970), and mark state transitions as segment boundaries.", "Details about the initialization step may be found in Appendix B.", "Algorithm 1: (SL)³: Semi-Supervised Skill Learning with Latent Language. Input: Unannotated demonstrations D = {(d_1, g_1), (d_2, g_2), ...}; Annotated demonstrations D_ann = {(d_1, g_1, ℓ_1), (d_2, g_2, ℓ_2), . . .
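The recurrence in Eq. (5) can be sketched as a dynamic program over monotonic segmentations. This is an illustrative HSMM-style Viterbi, not the paper's Algorithm 2: `seg_score(i, j, m)` is an assumed callable returning the log-probability under the executor that actions i..j-1 form the segment for subtask m (including its STOP).

```python
def best_segmentation(seg_score, n_actions, n_subtasks):
    """Highest-scoring monotonic alignment of n_actions actions to
    n_subtasks subtasks (cf. Eq. 5); every subtask gets at least one
    action. Runs in O(n_actions**2 * n_subtasks) time."""
    NEG = float("-inf")
    # best[m][j]: best score aligning the first j actions to the first m subtasks
    best = [[NEG] * (n_actions + 1) for _ in range(n_subtasks + 1)]
    back = [[0] * (n_actions + 1) for _ in range(n_subtasks + 1)]
    best[0][0] = 0.0
    for m in range(1, n_subtasks + 1):
        for j in range(1, n_actions + 1):
            for i in range(m - 1, j):
                if best[m - 1][i] == NEG:
                    continue
                s = best[m - 1][i] + seg_score(i, j, m - 1)
                if s > best[m][j]:
                    best[m][j], back[m][j] = s, i
    # back-trace the segment boundaries
    bounds, j = [], n_actions
    for m in range(n_subtasks, 0, -1):
        i = back[m][j]
        bounds.append((i, j))
        j = i
    return best[n_subtasks][n_actions], list(reversed(bounds))
```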
} Output: Inferred alignments α, labels ℓ, and parameters θ for C and E.", "// Segmentation // Infer alignments between actions and subtasks.", "if t = 1 then initialize α using the Baum-Welch algorithm (Baum et al., 1970) else α ← arg max_α L(θ, ℓ, α) + L_ann(θ, α) [Algorithm 2].", "end // Labeling // Infer subtask labels ℓ for unannotated demos D.", "Output: Maximum a posteriori alignments α, recovered from the scores via back-tracing (Rabiner, 1989).", "Labeling: arg max_ℓ L(θ, ℓ, α). Inference of latent, language-based plan descriptions in unannotated demonstrations involves an intractable search over string-valued ℓ.", "To approximate this search tractably, we use a learned, amortized inference procedure (Wainwright and Jordan, 2008; Hoffman et al., 2013; Kingma and Welling, 2014) to impute descriptions given fixed segmentations.", "During each parameter update step (described below), we train an inference model q(ℓ | a_{s(i)}, a_{s(i+1)}, g) to approximate the posterior distribution over descriptions for a given segment, given a goal, the segment's actions, and the actions from the subsequent segment.", "Then, during the labeling step, we label complete demonstrations by choosing the highest-scoring instruction for each trajectory independently: arg max_ℓ log p(d, g, ℓ, α) ≈ [ arg max_ℓ q(ℓ | a_{s(i)}, a_{s(i+1)}, g) | s(i) ∈ seg(α) ] (6).", "Labeling is performed only for demonstrations in D, leaving the labels for D_ann fixed during training.", "Parameter update: arg max_θ L(θ, ℓ, α) + L_ann(θ, α). This is the simplest of the three update steps: given fixed instructions and alignments, and E, C parameterized as neural networks, this objective is differentiable end-to-end.", "In each iteration, we train these to convergence (optimization details are described in Section 4 and Appendix C).", "During the parameter update step, we also fit parameters of the proposal model to maximize the likelihood Σ_d Σ_{s,ℓ} log q(ℓ | a_s, o_s) with respect to the
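The per-segment argmax of Eq. (6) can be sketched as an independent selection over candidate instructions. The `proposal` callable stands in for the learned inference model q; all names and the candidate-set restriction are illustrative assumptions.

```python
def label_segments(segments, goal, proposal, candidates):
    """Labeling step: for each inferred segment, independently choose the
    candidate instruction the proposal model scores highest, conditioning
    on the segment, the following segment, and the goal
    (cf. q(l | a_s(i), a_s(i+1), g))."""
    labels = []
    for i, seg in enumerate(segments):
        nxt = segments[i + 1] if i + 1 < len(segments) else []
        labels.append(max(candidates,
                          key=lambda l: proposal(l, seg, nxt, goal)))
    return labels
```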
current segmentations s and labels ℓ.", "As goals, subtask indicators, and actions may all be encoded as natural language strings, C and E may be implemented as conditional language models.", "As described below, we initialize both policies with models pretrained on large text corpora.", "Our experiments aim to answer two questions.", "First, does the latent-language policy representation described in Section 3 improve downstream performance on complex tasks?", "Second, how many natural language annotations are needed to train", "4 In our experiments, conditioning on observations or longer context did not improve the accuracy of this model.", "Environment: We investigate these questions in the ALFRED environment of Shridhar et al. (2020).", "ALFRED consists of a set of interactive simulated households containing a total of 120 rooms, accompanied by a dataset of 8,055 expert task demonstrations for an embodied agent annotated with 25,743 English-language instructions.", "Observations o are bitmap images from a forward-facing camera, and actions a are drawn from a set of 12 low-level navigation and manipulation primitives.", "Manipulation actions (7 of the 12) additionally require predicting a mask over the visual input to select an object for interaction.", "See Shridhar et al.
(2020) for details.", "While the ALFRED environment is typically used to evaluate instruction following models, which map from detailed, step-by-step natural language descriptions to action sequences (Shridhar et al., 2020; Singh et al., 2020; Corona et al., 2021), our experiments focus on a goal-only evaluation in which agents are given goals (but not fine-grained instructions) at test time.", "Several previous studies have also considered goal-only evaluation for ALFRED, but most use extremely fine-grained supervision at training time, including full supervision of symbolic plan representations and their alignments to demonstrations (Min et al., 2021; Zhang and Chai, 2021), or derived sub-task segmentations using ALFRED-specific rules (Blukis et al., 2021).", "In contrast, our approach supports learning from partial, language-based annotations without segmentations or alignments, and this data condition is the main focus of our evaluation.", "Modeling details C and E are implemented as sequence-to-sequence transformer networks (Vaswani et al., 2017).", "C , which maps from text-based goal specifications to text-based instruction sequences, is initialized with a pre-trained T5-small language model (Raffel et al., 2020).", "E , which maps from (textual) instructions and (image-based) observations to (textual) actions and (image-based) object selection masks, is also initialized with T5-small ; to incorporate visual input, this model first embeds observations using a pretrained ResNet18 model (He et al., 2016) and transforms these linearly to the same dimensionality as the word embedding layer.", "Details about the architecture of C and E may be found in Appendix C.
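The visual-input handling just described (pretrained image features projected linearly into the word-embedding space of the executor) can be sketched as follows. This is an illustrative sketch only: the dimension names, the random projection, and the `encode_step` helper are assumptions, standing in for the learned layers in the actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: ResNet18 pooled features are 512-d, and we assume a
# T5-small word-embedding width of 512 (values illustrative, not from the paper).
D_IMG, D_MODEL = 512, 512

# Linear projection from image-feature space to word-embedding space, standing
# in for the learned transformation described in the text.
W_proj = rng.normal(scale=0.02, size=(D_IMG, D_MODEL))

def encode_step(token_embeddings: np.ndarray, image_feature: np.ndarray) -> np.ndarray:
    """Prepend a projected visual observation to the instruction-token
    embeddings, so the executor's transformer consumes one sequence of
    same-width vectors."""
    visual_token = image_feature @ W_proj          # shape: (D_MODEL,)
    return np.vstack([visual_token, token_embeddings])

# Toy usage: 5 instruction-token embeddings plus one observation.
tokens = rng.normal(size=(5, D_MODEL))
obs = rng.normal(size=(D_IMG,))
seq = encode_step(tokens, obs)
print(seq.shape)  # (6, 512)
```

The design point being illustrated is that a single linear map suffices to make heterogeneous inputs (image features and word embeddings) interchangeable from the transformer's point of view.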
Model variants for exploration In ALFRED, navigation in the goal-only condition requires exploration of the environment, but no exploration is demonstrated in training data, and techniques other than imitation learning are required for this specific skill.", "To reflect this, we replace all annotations containing detailed navigation instructions (\"go to the glass on the table to your left\") with generic ones (\"find a glass\").", "Examples and details of how navigation instructions are modified can be found in Appendix E and Fig. 7.", "The ordinary (SL) 3 model described above is trained on these abstracted instructions.", "A key advantage of (SL) 3 is modularity: individual skills may be independently supervised or reimplemented.", "To further improve (SL) 3 's navigation capabilities, we introduce two model variants in which sub-task specifications beginning Find.", ".", ". are executed by either a planner with ground-truth environment information or a specialized navigation module from the HLSM model (Blukis et al., 2021) rather than E .", "Outside of navigation, these models preserve the architecture and training procedure of (SL) 3 , and are labeled (SL) 3 +planner and (SL) 3 +HLSM in experiments below.", "seq2seq : A standard (non-hierarchical) goal-conditioned policy, trained on the ( g, d ) pairs in D ∪ D ann to maximize ∑ a , o ,g log ( a | o , g ) , with parameterized similarly to E .", "seq2seq2seq : A supervised hierarchical policy with the same architectures for C and E as in (SL) 3 , but with C trained to generate subtask sequences by maximizing ∑ ,g log C ( | g ) and E trained to maximize ∑ a , o , ,g log E ( a | o , , g ) using only D ann .", "Because E maps from complete task sequences to complete low-level action sequences, training of this model involves no explicit alignment or segmentation steps.", "no-pretrain , no-latent : Ablations of the full (SL) 3 model in which C and E are, respectively, randomly initialized or updated only on
L ann ( , ) during the parameter update phase.", "We additionally contextualize our approach by comparing it to several state-of-the-art models for the instruction following task in ALFRED: S+ (Shridhar et al., 2020), MOCA (Singh et al., 2020), Modular (Corona et al., 2021), HiTUT (Zhang and Chai, 2021), ABP (Kim et al., 2021), ET (Pashevich et al., 2021), EmBERT (Suglia et al., 2021), and FILM (Min et al., 2021).", "Like seq2seq , these are neural sequence-to-sequence models trained to map instructions to actions; they incorporate several standard modeling improvements from the instruction following literature, including progress monitoring (Ma et al., 2019) and pretrained object recognizers (Singh et al., 2020).", "Many of these models are trained with stronger supervision than (SL) 3 , including instructions and alignments during training, and ground truth instructions during evaluation; see Table 3 for details.", "Evaluation Following Shridhar et al. (2020), Table", "1(a) computes the online, subtask-level accuracy of each policy, and Table", "1(b) computes the end-to-end success rate of each policy.", "See the ALFRED paper for details of these evaluations.", "For data-efficiency experiments involving a large number of policy variants (Table 2, Fig.
4), we instead use an offline evaluation in which we measure the fraction of subtasks in which a policy's predicted actions (ignoring object selection masks) exactly match the ground truth action sequence.", "Table 1 compares (SL) 3 with flat and hierarchical imitation learning baselines.", "The table includes two versions of the model: a 100% model trained with full instruction supervision ( |D| = 0 , |D ann | = 21000 ) and a 10% model trained with only a small fraction of labeled demonstrations ( |D| = 19000 , |D ann | = 2000 ).", "seq2seq and seq2seq2seq models are always trained with 100% of natural language annotations.", "Results are shown in Table 1.", "We find: (SL) 3 improves on flat policies: In both the 10% and 100% conditions, it improves over the subtask completion rate of the seq2seq (goals-to-actions) model by 25%.", "When either planner- or mapping-based navigation is used in conjunction with (SL) 3 , it achieves end-to-end performance comparable to the HLSM method, which relies on similar supervision.", "Strikingly, it outperforms several recent methods with access to even more detailed information at training or evaluation time.", "Language-based policies can be trained with sparse natural language annotations: Performance of (SL) 3 trained with 10% and 100% natural language annotations is similar (and in both cases superior to seq2seq and seq2seq2seq trained on 100% of data).", "Appendix Fig. 4 shows more detailed supervision curves.", "Ablation experiments in Table 2 show that inference of latent training plans is important for this result: with no inference of", "(a) Online subtask success rate for (SL) 3 and baselines. Model: Avg Clean Cool Heat Pick Put Slice Toggle GoTo | (SL) 3 (10%): 50 56 75 74 50 48 54 32 13 | (SL) 3 (100%): 53 68 82 75 50 45 55 32 15 | seq2seq: 25 16 33 64 20 15 25 13 14 | seq2seq2seq: 39 15 69 58 29 42 50 32 15", "(b) End-to-end task success rates for (SL) 3 and other models.", "latent instructions (i.e.
training only on annotated demonstrations), performance drops from 56% to 52%.", "Fig. 3 shows an example of the structure inferred for an unannotated trajectory: the model inserts reasonable segment boundaries and accurately labels each step.", "Language model pretraining improves automated decision-making.", "Ablation experiments in Table 2 provide details.", "Language model pretraining of C and E (on ungrounded text) is crucial for good performance in the low-data regime: with 10% of annotations, models trained from scratch complete 49% of tasks (vs 56% for pretrained models).", "We attribute this result in part to the fact that pretrained language models encode information about the common-sense structure of plans, e.g. the fact that slicing a tomato first requires finding a knife .", "Such models are well-positioned to adapt to planning problems that require modeling relations between natural language strings.", "Figure 3: Example of an inferred segmentation and labeling for an unannotated trajectory.", "These experiments point to a potentially broad role for pretrained language models in tasks that do not involve language as an input or an output.", "One especially interesting consequence of the use of language-based skills is our model's ability to produce high-level plans for out-of-distribution goals, featuring objects or actions that are not part of the ALFRED dataset at all.", "Examples are provided in Fig. 5 and discussed in Appendix A.
While additional modeling work is needed to generate low-level actions for these high-level plans, they point to generalization as a key differentiator between latent language policies and ordinary hierarchical ones.", "Our approach draws on a large body of research at the intersection of natural language processing, representation learning, and autonomous control.", "The use of natural language annotations to scaffold learning, especially in computer vision and program synthesis applications, has been the subject of a number of previous studies (Branavan et al., 2009; Frome et al., 2013; Andreas et al., 2018; Wong et al., 2021).", "Here, we use language to support policy learning, specifically by using natural language instructions to discover compositional subtask abstractions that can support autonomous control.", "Our approach is closely related to previous work on learning skill libraries from policy sketches (Andreas et al., 2017; Shiarlis et al., 2018); instead of the fixed skill inventory used by policy sketches, (SL) 3 learns an open-ended, compositional library of behaviors indexed by natural language strings.", "Hierarchical policies Hierarchical policy learning and temporal abstraction have been major areas of focus since the earliest research on reinforcement learning and imitation learning (McGovern and Barto, 2001; Konidaris et al., 2012; Daniel et al., 2012).", "Past work typically relies on direct supervision or manual specification of the space of high-level skills (Sutton et al., 1999; Kulkarni et al., 2016) or fully unsupervised skill discovery (Dietterich, 1999; Bacon et al., 2017).", "Our approach uses policy architectures from this literature, but aims to provide a mechanism for supervision that allows fine-grained control over the space of learned skills (as in fully supervised approaches) while requiring only small amounts of easy-to-gather human supervision.", "Tasks at the intersection of language and control include instruction
following (Chen and Mooney, 2011; Branavan et al., 2009; Tellex et al., 2011; Anderson et al., 2018; Misra et al., 2017), embodied question answering (Das et al., 2018; Gordon et al., 2018) and dialog tasks (Tellex et al., 2020).", "As in our work, representations of language learned from large text corpora facilitate grounded language learning (Shridhar et al., 2021), and interaction with the environment can in turn improve the accuracy of language generation (Zellers et al., 2021); future work might extend our framework for semi-supervised inference of plan descriptions to these settings as well.", "We have presented (SL) 3 , a framework for learning hierarchical policies from demonstrations sparsely annotated with natural language descriptions.", "Using these annotations, (SL) 3 infers the latent structure of unannotated demonstrations, automatically segmenting them into subtasks and labeling each subtask with a compositional description.", "Learning yields a hierarchical policy in which natural language serves as an abstract representation of subgoals and plans: a controller sub-policy maps from goals to natural language plan specifications, and a modular executor maps each component of the plan to a sequence of low-level actions.", "In simulated household environments, this model can complete abstract goals (like slice a tomato ) with accuracy comparable to state-of-the-art models trained and evaluated with fine-grained plans ( find a knife , carry the knife to the tomato , . . .
).", "While our evaluation has focused on household robotics tasks, the hierarchical structure inferred by (SL) 3 is present in a variety of learning problems, including image understanding, program synthesis, and language generation.", "In all those domains, generalized versions of (SL) 3 might offer a framework for building high-quality models using only a small amount of rich natural language supervision.", "We would like to thank Valts Blukis and Shikhar Murty for helpful discussions.", "Also thanks to Joe O'Connor, Gabe Grand and the anonymous reviewers for their feedback on an early draft of the paper." ]
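The controller-executor factorization summarized in the conclusion above (a controller maps goals to language plan specifications; an executor maps each specification to low-level actions) can be sketched with toy stand-in models. The lookup tables and the specific action strings below are illustrative assumptions, not the paper's learned seq2seq policies.

```python
from typing import List

def controller(goal: str) -> List[str]:
    """Stand-in for C: goal -> sequence of natural-language subtask descriptions."""
    plans = {
        "slice a tomato": ["find a knife",
                           "carry the knife to the tomato",
                           "slice the tomato"],
    }
    return plans.get(goal, [])

def executor(subtask: str) -> List[str]:
    """Stand-in for E: one subtask description -> low-level actions."""
    skills = {
        "find a knife": ["MoveAhead", "PickupObject"],
        "carry the knife to the tomato": ["MoveAhead", "MoveAhead"],
        "slice the tomato": ["SliceObject"],
    }
    return skills.get(subtask, [])

def act(goal: str) -> List[str]:
    # Hierarchical rollout: plan in language, then execute each subtask in order.
    return [a for subtask in controller(goal) for a in executor(subtask)]

print(act("slice a tomato"))
# ['MoveAhead', 'PickupObject', 'MoveAhead', 'MoveAhead', 'SliceObject']
```

Because the interface between the two modules is a plain string, either module can be swapped out independently, which is the modularity the (SL) 3 variants above exploit.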
[ "method", "objective", "method", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "objective", "result", "other", "objective", "abstain", "result", "method", "other", "other", "abstain", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "other", "other" ]
[ "We study the problem of Event Causality Identification (ECI) to detect causal relations between event mention pairs in text.", "Although deep learning models have recently shown state-of-the-art performance for ECI, they are limited to the intra-sentence setting where event mention pairs are presented in the same sentences.", "This work addresses this issue by developing a novel deep learning model for document-level ECI (DECI) to accept inter-sentence event mention pairs.", "As such, we propose a graph-based model that constructs interaction graphs to capture relevant connections between important objects for DECI in input documents.", "Such interaction graphs are then consumed by graph convolutional networks to learn document context-augmented representations for causality prediction between events.", "Various information sources are introduced to enrich the interaction graphs for DECI, featuring discourse, syntax, and semantic information.", "Our extensive experiments show that the proposed model achieves state-of-the-art performance on two benchmark datasets.", "Event Causality Identification (ECI) is an important problem in Information Extraction that seeks to predict the causal relation between a pair of events mentioned in text.", "For instance, in the sentence The building was nearly destroyed by a fire early Tuesday morning. , an ECI system should be able to recognize the causal relation between the two events triggered by destroyed and fire", "(called event mentions), i.e., fire cause destroyed .", "ECI finds applications in a wide range of problems in natural language processing", "(NLP), including machine reading comprehension", "(Berant et al., 2014), future event forecasting", "(Hashimoto, 2019), and why-question answering", "(Oh et al., 2016).", "Figure 1: An example document for DECI in which coreference links the event mentions damaged 2 and quake 2 (in the last sentence) to damaged 1 and quake 1 (in the first sentence); the phrase damaged 1 due to the quake 1 provides direct causal evidence.", "Ning et al., 2018; Gao et al., 2019)", "while recent approaches have examined deep learning methods to deliver state-of-the-art performance for this task", "(Kadowaki et al., 2019; Liu et al., 2020).", "Despite the good performance, the existing deep learning methods for ECI are limited in that they only model the context at the sentence level, assuming the event mention pairs of interest to be in the same sentences", "(i.e., intra-sentence setting).", "On the one hand, this assumption fails to cover the inter-sentence scenario where the input pairs of event mentions can appear in different sentences in the documents, e.g., in the recent EventStoryLine dataset for ECI", "(Caselli and Vossen, 2017).", "On the other hand, the sole modeling of sentence context cannot benefit from the document-level information that can provide useful evidence to facilitate the causality prediction for events.", "An example can be seen in Figure 1 where the pair of event mentions of interest involves damaged 2 and quake 2 in the last", "(green)", "sentence.", "A system that only considers sentence context might find it challenging to predict the causal relation in this case due to the long distance and the appearance of many irrelevant words between damaged 2 and quake 2 in the sentence.",
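The document-level evidence described above, coreference links that connect a distant event pair to a sentence containing direct causal evidence, can be sketched as a toy graph construction. This is an illustrative sketch only, not the paper's model: the mention names mirror the Figure 1 example, and the matrix-power reachability check is a simplification of GCN-style information propagation.

```python
import itertools
import numpy as np

# Nodes are event mentions; edges mark coreference, so evidence attached to
# one mention (e.g. "damaged1 due to the quake1") can reach a coreferent
# mention elsewhere in the document.
mentions = ["damaged1", "quake1", "damaged2", "quake2"]
idx = {m: i for i, m in enumerate(mentions)}
coref_chains = [["damaged1", "damaged2"], ["quake1", "quake2"]]

A = np.zeros((len(mentions), len(mentions)))
for chain in coref_chains:
    for a, b in itertools.combinations(chain, 2):
        A[idx[a], idx[b]] = A[idx[b], idx[a]] = 1.0

# Local causal evidence found within one sentence ("damaged1 due to the quake1").
A[idx["damaged1"], idx["quake1"]] = A[idx["quake1"], idx["damaged1"]] = 1.0

# The distant pair (damaged2, quake2) has no direct edge, but is connected by
# a length-3 walk: damaged2 - damaged1 - quake1 - quake2.
reach = np.linalg.matrix_power(A, 3)
print(reach[idx["damaged2"], idx["quake2"]] > 0)  # True
```

The point of the sketch is that cross-sentence causality becomes a short-path problem once coreference edges are added to the graph.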
"However, if a system relies on document-level information and recognizes the coreference of the event mention pairs", "( damaged 2 , damaged 1 )", "and", "( quake 2 , quake 1 ), it can exploit the clear evidence of damaged 1 due to the quake 1 to infer the causal relation for damaged 2 and quake 2 .", "To fill this gap, this work aims to develop a deep learning model for document-level ECI", "(DECI)", "where input event mentions can reside in different sentences of an input document.", "As such, a major challenge in modeling document-level context with deep learning involves capturing necessary interactions/connections between relevant objects for ECI.", "For instance, in our example in Figure 1, relevant objects include the event mentions and the important context words", "(i.e., due to )", "while necessary connections involve event coreference and interactions of event mentions with context words", "(i.e., between damaged 1 , quake 1 , and due to ).", "Motivated by this intuition, we design a graph-based model for DECI where interaction graphs over relevant objects for documents are explicitly generated and consumed by Graph Convolutional Networks", "(GCN)", "(Kipf and Welling, 2017; Nguyen and Grishman, 2018)", "to induce representation vectors for prediction.", "To our knowledge, this is the first work that employs interaction graphs for documents and GCNs for ECI.", "How can interaction graphs for documents", "(i.e., nodes and edges)", "be formed to learn effective representation vectors for ECI?", "First, the intuitive approach to design nodes for interaction graphs is to leverage relevant objects for ECI in documents.", "Accordingly, we employ all the words, event mentions and entity mentions in a document to establish nodes for its interaction graph.", "Here, we note that entity mentions", "(e.g., names, pronouns, nominals)", "might also be helpful for ECI as entity mentions can serve as arguments", "(participants)", "of events and events with the same
arguments might have a better chance of being involved in a causal relation.", "Second, for edges of interaction graphs, we propose to exploit different knowledge sources or information types to create different types of connections for the graph nodes.", "Such connection types are then combined to produce a single rich interaction graph for an input document for representation learning in ECI.", "In particular, we focus on three major types of information for node connections for ECI in this work, i.e., discourse-based, syntax-based, and semantic-based information.", "As such, the discourse-based information explores the sentence boundary and coreference of entity/event mentions in documents to link the nodes in interaction graphs", "(motivated by our example in Figure 1).", "The syntax-based information connects words based on their syntactic relations in dependency trees of sentences, suggested by the use of shortest dependency paths between event mentions as features for ECI in prior work", "(Gao et al., 2019).", "In contrast, the intuition for semantic-based information is that semantically related words/entity/event mentions in documents can also provide useful evidence to infer the causal relation for events.", "For instance, consider the following sentence: The violence in and near the Yida refugee camp, located 10 miles south of the border, came one day after bombings were reported in another region of South Sudan, an attack that provoked strong condemnation from the U.S. State Department.
Here, the causal relation between attack and condemnation can be easily predicted due to the direct evidence in the context", "(i.e., via provoked ).", "However, the more complicated and implicit context between bombings and condemnation would make it more difficult for ECI systems to recognize the causality in this case.", "Fortunately, the systems can combine the causal relation between attack and condemnation and the close semantic similarity between the two events bombings and attack to facilitate the causality prediction between bombings and condemnation .", "Finally, we propose a novel mechanism to regularize interaction graphs and representation vectors to further improve the representation learning for DECI.", "As such, we aim to constrain the model so that edges with small weights in the generated graphs make a minimal contribution to representation vectors.", "In this way, we expect the model to be more robust against irrelevant/noisy edges in the graphs and still promote useful edges for representation learning.", "We conduct extensive experiments on two datasets for DECI.", "The results demonstrate the effectiveness of the proposed model and lead to state-of-the-art performance for DECI.", "We formulate DECI as a binary classification problem.", "The input to the models includes a document D = w 1 , w 2 , . . .
, w N", "(of N words/tokens)", "that can have multiple sentences, and two event mentions of interest e s and e t in D .", "The goal of DECI is to predict whether there exists a causal relation between e s and e t in D .", "Our model for DECI involves three major components:", "(i)", "Document Encoder to transform the words into representation vectors,", "(ii)", "Structure Generation to generate an interaction graph for D , and", "(iii)", "Representation Regularization to regularize the representation vectors.", "We provide details for these components below.", "In the first step, we transform each word w i D into a representation vector x i using the contextualized embeddings of BERT", "(Devlin et al., 2019)", "(i.e., the BERT base version).", "In particular, as BERT might split w i into several word-pieces, we employ the average of the hidden vectors for the word-pieces of w i in the last layer of BERT as the representation vector x i for w i .", "To handle long documents with BERT, we divide D into segments of 512 word-pieces to be encoded separately.", "The resulting sequence X = x 1 , x 2 , . . . , x N for D is then sent to the next steps for further computation.", "The goal of this section is to generate an interaction graph G = {N , E} for D to facilitate representation learning for DECI.", "As such, the nodes and edges in G for our DECI problem are constructed as follows: Nodes : The node set N for our interaction graph G should capture relevant objects for the causal prediction between the two event mentions of interest e s and e t in D .", "As motivated earlier, we consider all the context words w i , event mentions, and entity mentions in D as relevant objects for our DECI problem.", "Formally, let E = { e 1 , e 2 , . . . , e | E | } and M = { m 1 , m 2 , . . .
, m | M | } be the sets of event mentions and entity mentions in D respectively", "(i.e., e s , e t E ).", "The node set N for G is thus formed by the union of D , E , and M : N = D ∪ E ∪ M = { n 1 , n 2 , . . . , n |N| } .", "In this work, we use the provided event mentions in the datasets for E , following prior work on DECI", "(Gao et al., 2019)", "while the Stanford CoreNLP toolkit is employed to obtain the entity mentions M .", "Edges : To formally represent the edges between the nodes in N for G , we use the adjacency matrix A = { a ij } i,j =1..|N|", "( a ij R ).", "Here, as we aim to use A as the input for Graph Convolutional Networks", "(GCN)", "to learn representation vectors for DECI, the value/score a ij between two nodes n i and n j in N is expected to estimate the importance", "(or the level of interaction)", "of n j for the representation computation of n i .", "In this way, n i and n j can directly interact and influence the representation computation for each other even if they are sequentially far away from each other in D .", "As presented in the introduction, three types of information are exploited to design the edges E", "(or compute the interaction scores a ij )", "for G in our model, including the discourse-based, syntax-based and semantic-based information.", "Discourse-based Edges : As the input document D can involve multiple sentences and event/entity mentions, understanding where they span and how they relate to each other is crucial to effectively encode the document for DECI.", "As such, we propose to exploit three types of discourse information to obtain the interaction graph G for D , i.e., the sentence boundary, the coreference structure, and the mention span for event/entity mentions in D .", "Sentence Boundary : The intuition for this type of information is that two event/entity mentions appearing in the same sentences tend to be more contextually related to each other than those in different sentences.", "This suggests the better
usefulness of event/entity mentions in the same sentences for the representation computation of each other.", "To capture this intuition, we propose to compute the sentence boundary-based interaction score a sentij for the nodes n i and n j in N where a sentij = 1 if n i and n j are the event/entity mentions of the same sentences in D", "(i.e., n i , n j E M ); and 0 otherwise.", "a sentij will be used as an input to compute the overall interaction score a ij for G later.", "Coreference Structure : Instead of considering within-sentence information as in a sentij , coreference structure concerns the connection of event and entity mentions across sentences to enrich their representations with the contextual information of the coreferring ones", "(illustrated in Figure 1).", "As such, to enable the interaction of representations for coreferring event/entity mentions, we compute the coreference-based score a corefij for each pair of nodes n i and n j to contribute to the overall score a ij for representation learning.", "Here, a corefij is set to 1 if n i and n j are coreferring event/entity mentions in D , and 0 otherwise.", "Note that we use the Stanford CoreNLP toolkit to determine the coreference of entity mentions while similar to", "(Gao et al., 2019), gold event coreference information in the DECI datasets is utilized in this work.", "Mention Span : The sentence boundary and coreference structure scores only model interactions of event and entity mentions in D based on discourse information.", "To further connect event and entity mentions with context words w i for representation learning, we employ the mention span-based interaction score a spanij as another input for a ij , where a spanij is only set to 1", "(i.e., 0 otherwise)", "if n i is a word", "( n i D )", "in the span of the entity/event mention n j", "( n j E M )", "or vice versa.", "Note that a spanij is important as it allows representation vectors for event/entity mentions to be grounded on the
contextual information in D .", "Syntax-based Edges : Prior work has leveraged dependency parsing trees of sentences in documents as a useful source of information to generate features for DECI systems, e.g., using the shortest dependency paths between the two event mentions of interest", "(Gao et al., 2019).", "As such, we expect that the dependency trees of the sentences in D can also provide beneficial information to connect the nodes in N to learn effective representation vectors for DECI.", "To this end, we propose to employ the dependency relations/connections between the words in D to obtain a syntax-based interaction score a depij for each pair of nodes n i and n j in N , serving as an additional input for a ij .", "In particular, directly inheriting the graph structures of the dependency trees of the sentences in D , we set a depij to 1 if n i and n j are two words in the same sentence", "(i.e., n i , n j D )", "and they are connected to each other in the corresponding dependency tree, and 0 otherwise.", "Thus, two words are considered important to each other for representation learning in DECI if they are neighbors in the dependency trees 1 .", "Semantic-based Edges : This information exploits the semantic similarity of the nodes in N to enrich the overall interaction scores a ij for G .", "The motivation is that a node n i would contribute more to the representation vector of another node n j for DECI if n i is more semantically related to n j", "(illustrated in the introduction).", "To this end, we propose two complementary methods to compute the semantic similarity between the nodes for a ij based on context-based and knowledge-based information.", "Context-based Semantic : In this method, we seek to first obtain a representation vector v i for the semantics of each node n i in N based on its context in D .", "The context-based semantic similarity a contextij for the nodes is then computed via such representation vectors and fed into the estimation of
the overall interaction score a ij .", "In particular, the context-based representation vector v i for a word node n i D is directly inherited from the contextualized embedding vector x c X (Footnote 1: We use Stanford CoreNLP to parse the sentences.)", "(i.e., v i = x c )", "of the corresponding word w c for n i .", "In contrast, for event and entity mentions, their representation vectors are computed by max-pooling the contextualized embedding vectors in X that correspond to the words in the event/entity mentions' spans.", "Eventually, the context-based similarity score a contextij for two nodes n i and n j in N is obtained via the normalized score: k i = U k v i , q i = U q v i , a contextij = exp( k i q j ) / ∑ j' exp( k i q j' )", "where U k and U q are trainable weight matrices, and the biases are omitted for brevity in this work.", "Knowledge-based Semantic : Instead of using contextual information, this method leverages the external knowledge of the nodes from knowledge bases to capture their semantics for node similarity computation.", "We expect the external knowledge for the nodes to provide complementary information for the contextual information in D , thus further enriching the semantic similarity scores", "(and overall interaction scores a ij )", "for the nodes in N .", "To this end, we propose to utilize WordNet", "(Miller, 1995), a rich knowledge base for word meanings, to obtain external knowledge for the words in D .", "As such, WordNet involves a network of word meanings", "(i.e., synsets)", "that are connected to each other via various semantic relations", "(e.g., synonyms, hyponyms).", "Our first step to generate knowledge-based similarity scores involves mapping each word node n i D N to a synset node M i in WordNet using a Word Sense Disambiguation", "(WSD)", "tool.", "In particular, we employ WordNet 3.0 and the state-of-the-art BERT-based WSD model in", "(Blevins and Zettlemoyer, 2020)", "to perform the word-synset mapping in this work.", "Afterward, we compute a
knowledge-based similarity score a^struct_ij for each pair of word nodes n_i and n_j in D ⊂ N using the structure-based similarity of their linked synsets M_i and M_j in WordNet (i.e., a^struct_ij = 0 if either n_i or n_j is not a word node in D ⊂ N).", "Accordingly, the Lin similarity measure (Lin, 1998) for synset nodes in WordNet is utilized for this purpose: a^struct_ij = 2·IC(LCS(M_i, M_j)) / (IC(M_i) + IC(M_j)), where IC and LCS amount to the information content of synset nodes and the least common subsumer of two synsets in the WordNet hierarchy (the most specific ancestor node), respectively.", "Structure Combination: Up to now, six scores have been generated to capture the level of interactions in representation learning for each pair of nodes n_i and n_j in N according to different information sources (i.e., a^sent_ij, a^coref_ij, a^span_ij, a^dep_ij, a^context_ij and a^struct_ij).", "For convenience, we group the six scores for each node pair n_i and n_j into a vector d_ij = [a^sent_ij, a^coref_ij, a^span_ij, a^dep_ij, a^context_ij, a^struct_ij] of size 6.", "To unify the scores in d_ij to form an overall rich interaction score a_ij for n_i and n_j in G, we use the following normalization (Equation 2): a_ij = exp(d_ij q^T) / Σ_{v=1..|N|} exp(d_iv q^T), where q is a trainable weight vector.", "As mentioned above, given the combined interaction graph G with the adjacency matrix A = {a_ij}_{i,j=1..|N|}, we use GCNs to induce representation vectors for the nodes in N for DECI.", "In particular, the GCN model in our work takes the context-based representation vectors v_i of the nodes n_i ∈ N as the input.", "For convenience, we organize v_i into rows of the input matrix H_0 = [v_1, . . .
, v_|N|].", "The GCN model then involves G layers that generate the matrix H_l at the l-th layer for the nodes in N (1 ≤ l ≤ G) via: H_l = ReLU(A H_{l-1} W_l) (W_l is the weight matrix for the l-th layer).", "The output of the GCN model after G layers is H_G, whose rows are denoted by H_G = [h_1, . . ., h_|N|], serving as more abstract representation vectors for the nodes n_i for causality prediction.", "This GCN-based computation of H_G is written as H_G = [h_1, . . ., h_|N|] = GCN(H_0, A, G) for convenience.", "Our model so far renders G as a fully connected graph for representation learning whose edge weights are induced and recorded in the adjacency matrix A = {a_ij}_{i,j=1..|N|} (0 < a_ij < 1).", "However, it is intuitive that not all the edges in G are relevant/necessary for the representation vectors in DECI.", "Some edges might even introduce noisy information if they are preserved in the graph.", "As such, we hypothesize that edges with small weights/scores assigned by the learning process in A are mostly noisy edges and should have minimal contribution to the induced representation vectors.", "To this end, we propose to obtain a sparser version G′ of G where edges with small weights are completely eliminated.", "In particular, we employ a threshold τ (0 < τ < 1) and compute the adjacency matrix A′ = {a′_ij}_{i,j=1..|N|} for G′ via: a′_ij = a_ij if a_ij > τ, and 0 otherwise.", "To explicitly encourage the minimal contribution of small-weight edges, our goal is to enforce that the representation vectors learned by the sparse graph G′ are still close to those learned by the full graph G (i.e., the removal of small-weight edges in G′ does not have much effect on representation learning).", "To implement this idea, we first apply our GCN model over the sparse graph G′ to learn another version
of GCN-based representation vectors for the nodes in N: H′_G = [h′_1, . . ., h′_|N|] = GCN(H_0, A′, G).", "Afterward, we seek to minimize the difference L_reg between representation vectors of corresponding nodes in H_G and H′_G in the overall loss function: L_reg = 1/|N| Σ_{i=1..|N|} ||h_i - h′_i||_2^2.", "Finally, let n_s′ and n_t′ be the two nodes in N that correspond to the two event mentions of interest e_s and e_t for DECI.", "An overall representation vector V = [h_{s′}, h_{t′}, h′_{s′}, h′_{t′}] is formed (from both H_G and H′_G) and fed into a two-layer feed-forward network with softmax in the end to produce the distribution P(·|D, e_s, e_t) over the two possible types for our DECI problem (whether there is a causal relation between e_s and e_t or not).", "The negative log-likelihood function L_pred is then computed by: L_pred = -log P(y|D, e_s, e_t) (y is the golden type for DECI).", "The overall loss function to train our model is thus: L = L_pred + λ·L_reg, where λ is a trade-off parameter.", "Following prior work (Gao et al., 2019; Liu et al., 2020), we evaluate our models on two benchmark datasets for ECI, i.e., EventStoryLine and Causal-TimeBank.", "In particular, EventStoryLine (i.e., version 0.9) is introduced in (Caselli and Vossen, 2017), involving 258 documents, 22 topics, 4316 sentences, 5334 event mentions, 7805 intra-sentence and 46521 inter-sentence event mention pairs (1770 and 3855 of them are annotated with a causal relation, respectively).", "Following (Gao et al., 2018), we group documents according to their topics and put the topics in the order based on their topic IDs.", "The documents in the last two topics are used for the development
data, while the documents in the remaining 20 topics are employed for a 5-fold cross-validation evaluation, using the same data split as in (Gao et al., 2019; Liu et al., 2020).", "(Table 1 reports P, R, and F1 of the models under the intra-sentence, inter-sentence, and intra + inter settings.)", "For Causal-TimeBank (Mirza, 2014a), this dataset consists of 184 documents, 6813 events, and 318 of 7608 event mention pairs annotated with a causal relation.", "Following (Liu et al., 2020), we do a 10-fold cross-validation evaluation using the same data split for this dataset.", "Note that as in (Liu et al., 2020), we only evaluate the ECI performance for intra-sentence events in Causal-TimeBank as the number of inter-sentence event mention pairs with the causal relation is very small (i.e., only 18 pairs).", "We tune the hyperparameters for our model on the development data of EventStoryLine and use the chosen parameters to train the models for both EventStoryLine and Causal-TimeBank.", "The selected values from the tuning process include: 1e-5 for the learning rate of the Adam optimizer; 8 for the mini-batch size; 128 hidden units for all the feed-forward network and GCN layers; 2 layers for the GCN model (G = 2); τ = 0.5 for the weight threshold; and λ = 0.2 for the trade-off parameter in the loss function L.", "Finally, as mentioned earlier, we use the BERT base model (of 768 dimensions) for the pre-trained word embeddings (updated during the training) in this work.", "We compare our model (called RichGCN) with the state-of-the-art models for ECI in each benchmark dataset as follows.", "EventStoryLine: For this dataset, the following baselines are chosen for comparison: (i) OP: a dummy model used in (Caselli and Vossen, 2017) that assigns a causal relation to every pair of event mentions; (ii) LSTM (Gao et al., 2019): a dependency path based sequential model that is adopted from (Cheng and Miyao, 2017); (iii) Seq (Gao et al., 2019): another
dependency path based sequential model that is originally developed for temporal relation prediction in (Choubey and Huang, 2017) and applied to ECI; (iv) BERT: a baseline method that takes the embedding vectors from BERT and performs ECI in (Liu et al., 2020).", "Note that (Liu et al., 2020) only reports the performance on intra-sentence events of EventStoryLine for this model.", "We reimplement and fine-tune the model to obtain its performance for inter-sentence events.", "Our reimplemented model for BERT achieves higher performance on intra-sentence ECI than those in (Liu et al., 2020); (v) KnowDis (Zuo et al., 2020): a BERT-based model that leverages additional data from distant supervision; (vi) LR+ and LIP (Gao et al., 2019): document structure-based models that have the current state-of-the-art performance for inter-sentence ECI; and (vii) Know (Liu et al., 2020): a BERT-based model that exploits ConceptNet and achieves the state-of-the-art performance for intra-sentence ECI.", "Table 1 shows the performance of the models.", "Causal-TimeBank: We use the following baselines for this dataset: (i) RB: a rule-based system in (Mirza, 2014b); (ii) ML: a feature-based model for ECI in (Mirza, 2014a); and (iii) BERT and Know (Liu et al., 2020): these are the same models BERT and Know (respectively) for EventStoryLine (both are based on BERT).", "We use the reported performance for the two models in (Liu et al., 2020) for a fair comparison.", "Know has the current state-of-the-art performance for this dataset in our 10-fold cross-validation setting.", "Note that the BERT model essentially corresponds to our RichGCN model when the interaction graphs G and G′ (and thus the GCN model) are completely excluded.", "Table 2 presents the performance of these models on Causal-TimeBank.", "The most important observation from the tables is that the proposed model
RichGCN significantly outperforms all the baselines for both intra- and inter-sentence events on both EventStoryLine and Causal-TimeBank (p < 0.01), thus clearly demonstrating the effectiveness of the proposed model for DECI.", "In addition, we also see that BERT performs much worse than the document structure-based models LR+, LIP and RichGCN.", "The sequential modeling of the context in BERT is thus not effective for document-level ECI, necessitating better mechanisms to encode document context (e.g., via the interaction graph of relevant objects as we do).", "Finally, the significantly better performance of RichGCN over Know for intra-sentence ECI on different datasets confirms our intuition in the introduction that capturing context beyond sentences (i.e., document context as in RichGCN) is helpful for causal prediction of intra-sentence event pairs.", "This section analyzes the contribution of each component in the proposed model with an ablation study.", "In particular, we examine the following ablated models: (i) RichGCN - x, where x is one of the six interaction scores generated to compute the unified score a_ij (i.e., a^sent_ij, a^coref_ij, a^span_ij, a^dep_ij, a^context_ij and a^struct_ij).", "For instance, RichGCN - a^coref_ij refers to the RichGCN model where the coreference-based interaction score a^coref_ij is excluded from the computation of the overall score a_ij in Equation 2; (ii) RichGCN - Entity Nodes: the entity mention nodes in M are not included in the construction of the interaction graph G in this model (i.e., N = D ∪ E only); (iii) RichGCN - Event Nodes: the event mention nodes in E do not appear in the node set N of the interaction graph G in RichGCN (i.e., N = D ∪ M).", "We directly use the representation vectors v_i for the event mentions in the overall representation vector V for prediction in this model.", "Note that the interaction matrix A is also adapted accordingly in the ablated models RichGCN - Entity Nodes
and RichGCN - Event Nodes; (iv) RichGCN - GraphCombination: this model does not combine the six generated interaction scores to compute an overall score a_ij for A in Equation 2.", "Instead, it considers each of the six generated interaction scores as forming a separate interaction graph, thus generating six different graphs.", "The GCN model is then applied over these six graphs (using the same input representation vectors v_i for the nodes n_i in N).", "The outputs of the GCN model for the same node n_i (with different graphs) are then concatenated to produce the final representation vector for n_i (i.e., serving as h_i in the model).", "Note that we still employ the sparse graph idea (with G′ and the loss L_reg) in this model; (v) RichGCN - G and RichGCN - G′: these models exclude the graphs G or G′ from RichGCN (respectively).", "The regularization loss L_reg is thus not used, and the vectors generated by the excluded graphs are not employed in the final vector V (i.e., h_{s′}, h_{t′}, h′_{s′}, h′_{t′}) for prediction in these cases; and (vi) RichGCN - L_reg: this model removes the regularization term L_reg from the overall loss function L.", "Table 3 shows the performance of the models on the development data of EventStoryLine.", "As can be seen from the table, all the components are helpful for the proposed model RichGCN, as eliminating any of them degrades the performance significantly for both intra- and inter-sentence ECI.", "Notably, the worse performance of RichGCN - G suggests that only using the sparse graph G′ for GCN to completely cancel small-weight edges in G is suboptimal, as it might unexpectedly remove some useful (though small-weight) edges.", "Instead, the sparse graph should be exploited in conjunction with the full graph to minimize the overall contribution of small-weight edges, as we do in
RichGCN.", "To further demonstrate the benefits of document context modeling with GCN for intra-sentence ECI, we perform a cross-topic evaluation on EventStoryLine as in (Liu et al., 2020).", "In particular, as documents in different topics tend to mention different events in EventStoryLine, this section aims to train the models on a source topic, but evaluate them on other topics (i.e., the target topics) to assess topic generalization.", "Following (Liu et al., 2020), we choose topics T8, T13, and T18 in EventStoryLine as the source topics.", "For each of these source topics, the other topics are ranked according to their similarity with the source topic.", "As such, the similarity score between two topics t_1 and t_2 is computed as sim(t_1, t_2) = |E_{t_1} ∩ E_{t_2}| / |E_{t_1} ∪ E_{t_2}|, where E_t is the set of lemmas of event trigger words in topic t.", "Afterward, the topics with the lowest, medium and highest similarity scores with the source topic are chosen as the target topics for evaluation.", "Table 4 presents the intra-sentence ECI performance (i.e., F1 scores) of LIP, Know (Liu et al., 2020) and the proposed model RichGCN for this cross-topic experiment.", "It is clear from the table that RichGCN is significantly better than the baselines LIP and Know over different cross-topic settings, thereby further testifying to the generalization advantages of capturing document-level context via GCN for intra-sentence ECI in the proposed model.", "To suggest potential directions for future research, we analyze the errors made by the proposed model.", "In particular, we sample 100 event mention pairs in the development data of EventStoryLine whose causal relation cannot be predicted correctly by RichGCN.", "Afterward, we manually categorize these examples into different types, as described below: (i) Implicit causal relation: 33% of the errors in our model are due to the implicit indication of the causal relation between two event mentions in the context, necessitating common-sense
knowledge to make correct causality prediction.", "For instance, consider the following document: South Sudan warns of war after Sudan bombs refugee camp. Military aircraft from Sudan crossed the new international border with South Sudan and dropped bombs Thursday in and around a camp filled with refugees, officials said. A government official initially reported deaths, but an American activist who spoke to aid workers at the camp later said there were no casualties. RichGCN cannot recognize the causal relation between the two events bombs and deaths in this document.", "The reason is that there is no explicit context in the document to hint at such a relation.", "The models need to rely on the common-sense causal order of bombs and deaths to correctly predict the label in this case.", "(ii) Preprocessing toolkits: Our model leverages several toolkits to obtain information to construct the interaction graph G, including the dependency parser, the entity mention detection and coreference components (i.e., from Stanford CoreNLP), and the word sense disambiguation model.", "18% of the errors in our model originate from the errors in such toolkits, which introduce noise into our model.", "For instance, Stanford CoreNLP incorrectly considers South Sudan and Sudan as the same entity in some of the examples.", "(iii) Lack of factuality modeling: Our model fails in this error type as it does not consider the factuality of the causal relation, treating hypothetical relations as actual ones.", "This accounts for 5% of the errors.", "For instance, in the document above, the proposed model predicts a causal relation between war and bombs; however, this is incorrect (not factual) due to the appearance of the word warns.", "(iv) Lack of fine-grained distinction: The errors in this type (accounting for 23%) are due to the failure to capture the fine-grained distinction of event mentions in the context, causing confusion and incorrect predictions for the model.", "For instance, in the sentence
Updated: July 02, 2013 15:50 IST A 6.1-magnitude earthquake which hit the Indonesian province of Aceh on Tuesday killed a child, injured dozens and destroyed buildings, sparking panic in a region devastated by the quake-triggered tsunami of 2004. , our model incorrectly predicts killed and injured as having a causal relation with quake (underlined).", "This stems from the strong connection between the underlined quake and the 6.1-magnitude earthquake in the same sentence (i.e., due to the sentence boundary- and semantic-based interaction scores).", "This strong connection leads the model to believe that killed and injured are also caused by the underlined quake, as they are by the 6.1-magnitude earthquake.", "The model would need to better encode the fine-grained distinction between the underlined quake and the 6.1-magnitude earthquake (i.e., of the years 2004 and 2013, respectively) to address this issue.", "Finally, our analysis shows that the remaining errors have to do with annotation errors (6%) and more complicated issues that cannot be categorized clearly.", "The early feature-based methods for ECI have explored different features and resources to improve the performance, including lexical and syntactic patterns (Hashimoto, 2019; Gao et al., 2019), causality cues/markers (e.g., because) (Riaz and Girju, 2014a; Hidey and McKeown, 2016), statistical co-occurrence of events (Beamer and Girju, 2009; Do et al., 2011; Hu et al., 2017), temporal patterns (Mirza, 2014a; Ning et al., 2018), lexical semantics of events (Riaz and Girju, 2013, 2014b), and weakly supervised data (Hashimoto, 2019).", "Although we also apply related features and resources for ECI (e.g., syntax, WordNet), our model employs such resources to build interaction graphs for documents to induce more abstract representations with GCNs.", "Recently, deep learning has been applied to solve ECI, leveraging advanced language models (e.g., BERT) (Kadowaki et al., 2019; Zuo et al., 2020) and common-sense knowledge resources
(i.e., ConceptNet) (Liu et al., 2020) to produce state-of-the-art performance.", "However, none of these deep learning models has explored document-context modeling with rich information for graph construction and GCNs as we do.", "Recently, there has been much interest in designing task-specific graphs to learn representation vectors for different NLP tasks, including sentence-level graphs for event factuality identification (Pouran Ben Veyseh et al., 2019) and event argument extraction (Pouran Ben Veyseh et al., 2020; Nguyen and Nguyen, 2021), and document-level graphs for relation extraction (Christopoulou et al., 2019; Nan et al., 2020; Tran et al., 2020) and event argument extraction (Veyseh et al., 2021).", "Our model is different from such related work in that we design document-level interaction graphs that are tailored to our ECI task.", "In addition, our model is also the first model that employs the inherent structure of external knowledge graphs (i.e., WordNet) to augment interaction graphs for documents in representation learning.", "We present a novel deep learning model for document-level ECI to address the limitation of prior deep learning models that only focus on causal prediction for intra-sentence event mention pairs.", "Our model designs interaction graphs to capture important objects and connections within input documents, leveraging GCNs to induce representation vectors for causal prediction.", "We introduce several information sources to enrich the interaction graphs based on discourse, syntax, and semantic information.", "The experiments confirm the effectiveness of the proposed information sources and models for DECI.", "In the future, we plan to extend our model to other related tasks, e.g., event coreference resolution (Nguyen et al., 2016).", "This research has been supported by the Army Research Office (ARO) grant W911NF-21-1-0112.", "This research is also based upon work supported by the Office of the Director of National Intelligence
(ODNI), Intelligence Advanced Research Projects Activity (IARPA), via IARPA Contract No. 2019-19051600006 under the Better Extraction from Text Towards Enhanced Retrieval (BETTER) Program.", "The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ARO, ODNI, IARPA, the Department of Defense, or the U.S. Government.", "The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.", "This document does not contain technology or technical data controlled under either the U.S. International Traffic in Arms Regulations or the U.S. Export Administration Regulations." ]
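The knowledge-based edge score in the record above uses the Lin similarity over WordNet synsets, a^struct_ij = 2·IC(LCS(M_i, M_j)) / (IC(M_i) + IC(M_j)). A minimal, self-contained sketch of that computation, assuming a toy IS-A hierarchy with made-up synset probabilities in place of WordNet 3.0 and corpus-estimated information content (all names and numbers below are illustrative, not from the paper):

```python
import math

# Toy IS-A hierarchy and made-up subtree probabilities p(s); both are
# illustrative stand-ins for WordNet and corpus-derived counts.
PARENTS = {"dog": "canine", "cat": "feline", "canine": "carnivore",
           "feline": "carnivore", "carnivore": "animal", "animal": None}
PROB = {"dog": 0.05, "cat": 0.05, "canine": 0.10, "feline": 0.10,
        "carnivore": 0.30, "animal": 1.0}

def ancestors(s):
    """Path from s up to the root, starting with s itself."""
    path = []
    while s is not None:
        path.append(s)
        s = PARENTS[s]
    return path

def ic(s):
    """Information content IC(s) = -log p(s)."""
    return -math.log(PROB[s])

def lcs(s1, s2):
    """Least common subsumer: the most specific shared ancestor."""
    shared = set(ancestors(s2))
    return next(a for a in ancestors(s1) if a in shared)

def lin(s1, s2):
    """Lin similarity: 2 * IC(LCS(s1, s2)) / (IC(s1) + IC(s2))."""
    return 2 * ic(lcs(s1, s2)) / (ic(s1) + ic(s2))

print(round(lin("dog", "cat"), 3))  # -> 0.402
```

With real data, the IC values would come from corpus counts over WordNet (e.g., NLTK ships Brown-corpus information-content files for this measure), and the synsets M_i would come from the WSD step described above.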
[ "method", "abstain", "objective", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", 
"abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", 
"abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "method", "other", "objective", "other", "method", "objective", "objective", "method", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other" ]
[ "In recent years there has been a burgeoning interest in the use of computational methods to distinguish between elicited speech samples produced by patients with dementia, and those from healthy controls.", "The difference between perplexity estimates from two neural language models (LMs), one trained on transcripts of speech produced by healthy participants and the other trained on transcripts from patients with dementia, used as a single feature for diagnostic classification of unseen transcripts has been shown to produce state-of-the-art performance.", "However, little is known about why this approach is effective, and on account of the lack of case/control matching in the most widely-used evaluation set of transcripts (DementiaBank), it is unclear if these approaches are truly diagnostic, or are sensitive to other variables.", "In this paper, we interrogate neural LMs trained on participants with and without dementia using synthetic narratives previously developed to simulate progressive semantic dementia by manipulating lexical frequency.", "We find that perplexity of neural LMs is strongly and differentially associated with lexical frequency, and that a mixture model resulting from interpolating control and dementia LMs improves upon the current state-of-the-art for models trained on transcript text exclusively.", "Alzheimer's Disease (AD) is a debilitating neurodegenerative condition which currently has no cure, and Dementia of the Alzheimer's Type (DAT) is one of the most prominent manifestations of AD pathology.", "Prior to availability of disease-modifying therapies, it is important to focus on reducing the emotional and financial burden of this devastating disease on patients, caregivers, and the healthcare system.", "Recent longitudinal studies of aging show that cognitive manifestations of future dementia may appear as early as 18 years prior to clinical diagnosis, much earlier than previously believed (Rajan et al., 2015; 
Aguirre-Acevedo et al., 2016).", "With 30-40% of healthy adults subjectively reporting forgetfulness on a regular basis (Cooper et al., 2011), there is an urgent need to develop sensitive and specific, easy-to-use, safe, and cost-effective tools for monitoring AD-specific cognitive markers in individuals concerned about their cognitive function.", "Lack of clear diagnosis and prognosis, possibly for an extended period of time (i.e., many years), in this situation can produce uncertainty and negatively impact planning of future care (Stokes et al., 2015), and misattribution of AD symptoms to personality changes can lead to family conflict and social isolation (Boise et al., 1999; Bond et al., 2005).", "Delayed diagnosis also results in an estimated $7.9 trillion in medical and care costs (Alzheimer's Association, 2018) due to high utilization of emergency care, amongst other factors, by patients with undiagnosed AD.", "Cognitive status is reflected in spoken language.", "As manual analysis of such data is prohibitively time-consuming, the development and evaluation of computational methods through which symptoms of AD and other dementias can be identified on the basis of linguistic anomalies observed in transcripts of elicited speech samples have intensified in the last several years (Fraser et al., 2016; Yancheva and Rudzicz, 2016; Orimaye et al., 2017).", "This work has generally employed a supervised machine learning paradigm, in which a model is trained to distinguish between speech samples produced by patients with dementia and those from controls, using a set of deliberately engineered or computationally identified features.", "However, on account of the limited training data available, overfitting is a concern.", "[Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1946-1957, July 5-10, 2020. Association for Computational Linguistics.]", "This is particularly problematic in DAT, where the nature of linguistic anomalies varies 
between patients, and with AD progression (Altmann and McClung, 2008).", "In the current study we take a different approach, focusing our attention on the perplexity of a speech sample as estimated by neural LMs trained on transcripts of the speech of participants completing a cognitive task.", "To date, the most successful approach to using LM perplexity as a sole distinguishing feature between narratives by dementia patients and controls was proposed by Fritsch et al. (2019) and replicated by Klumpp et al. (2018).", "The approach consists of training two recurrent neural LMs: one on transcripts from patients with dementia and the other on transcripts from controls.", "The difference between the perplexities estimated with these two LMs results in very high classification accuracy (AUC: 0.92) reported by both studies.", "The explanation for this performance offered by Fritsch et al. (2019) relies on observations that patients with DAT describe the picture in an unforeseen way and their speech frequently diverts from the content of the picture, contains repetitions and incomplete utterances, and refers to objects in the picture using words like thing or something.", "This explanation, however, conflicts with the findings of Klumpp et al. (2018), which demonstrate similarly high classification accuracy (AUC: 0.91) with a single hidden layer non-recurrent neural network and bag-of-words input features, suggesting that while word sequences play a role, it may not be as large as previously believed by Fritsch et al. (2019).", "Klumpp et al.'s (2018) explanation contrasts local with global language properties of the picture descriptions being captured by recurrent neural LMs vs. 
the non-recurrent bag-of-words neural network classifier, respectively.", "Both of these explanations are based on informal qualitative observations of the data and are not entirely satisfying because both fail to explain the fact that it is precisely the difference between the control and dementia LMs that is able to discriminate between patients and controls.", "The individual LMs are not nearly as good at this categorization task.", "The objective of the current study is to quantify the extent to which the differences between neural LMs trained on language produced by DAT patients and controls reflect known deficits in language use in this disease, in particular the loss of access to relatively infrequent terms that occurs with disease progression (Almor et al., 1999a).", "We approach this objective by interrogating trained neural LMs with two methods: interrogation by perturbation, in which we evaluate how trained neural LMs respond to text that has been deliberately perturbed to simulate AD progression; and interrogation by interpolation, in which we develop and evaluate hybrid LMs by interpolating between neural LMs modeling language use with and without dementia.", "We find neural LMs are progressively more perplexed by text simulating disease of greater severity, and that this perplexity decreases with increasing contributions of an LM trained on transcripts from patients with AD, but increases again when only this LM is considered.", "Motivated by these observations, we modify the approach of Fritsch et al.
(2019) by incorporating an interpolated model and pre-trained word embeddings, with improvements in performance over the best results reported for models trained on transcript text exclusively.", "AD is a progressive disease, and the linguistic impairments that manifest reflect the extent of this progression (Altmann and McClung, 2008).", "In its early stages, deficits in the ability to encode recent memories are most evident.", "As the disease progresses, it affects regions of the brain that support semantic memory (Martin and Chao, 2001), i.e., knowledge of words and the concepts they represent, and deficits in language comprehension and production emerge (Altmann and McClung, 2008).", "A widely-used diagnostic task for elicitation of abnormalities in speech is the Cookie Theft picture description task from the Boston Diagnostic Aphasia Examination (Goodglass, 2000), which is considered to provide an adequate approximation of spontaneous speech.", "In this task, participants are asked to describe a picture of a pair of children colluding in the theft of cookies from the top shelf of a raised cupboard while their mother distractedly washes dishes (footnote 1).", "When used as a diagnostic instrument, the task can elicit features of AD and other dementias, such as pronoun overuse (Almor et al., 1999a), repetition (Hier et al., 1985; Pakhomov et al., 2018) and impaired recollection of key elements (or information units) from the picture (Giles et al., 1996).", "Due to the human-intensive nature of the analyses to detect such anomalies, automated methods present a desirable alternative.", "1 For a contemporary edition subscribing to fewer gender stereotypes, see Berube et al. (2018).", "2.2 Classification of Dementia Transcripts: A number of authors have investigated automated methods of identifying linguistic anomalies in dementia.", "The most widely-used data set for these studies is the DementiaBank corpus (Becker et al., 1994), which we employ for the current work.", "In some of
the early work on this corpus, Prud'hommeaux and Roark (2015) introduced a novel graph-based content summary score to distinguish between controls and dementia cases in this corpus with an area under the receiver operating characteristic curve (AUC) of 0.83.", "Much of the subsequent work relied on supervised machine learning, with a progression from manually engineered features to neural models mirroring general Natural Language Processing trends.", "For example, Fraser and Hirst (2016) report AD classification accuracy of over 81% on 10-fold cross-validation when applying logistic regression to 370 text-derived and acoustic features.", "In a series of papers, Orimaye et al. (2014; 2017; 2018) report tenfold cross-validation F-measures of up to 0.73 when applying a Support Vector Machine (SVM) to 21 syntactic and lexical features; SVM AUC on leave-pair-out cross-validation (LPOCV) of 0.82 and 0.93 with the best manually-engineered feature set and the best 1,000 of 16,903 lexical, syntactic and n-gram features (with selection based on information gain) respectively; and an LPOCV AUC of 0.73-0.83 across a range of deep neural network models with high-order n-gram features.", "Yancheva and Rudzicz (2016) derive topic-related features from word vector clusters to obtain an F-score of 0.74 with a random forest classifier.", "Karlekar et al.
(2018) report an utterance-level accuracy of 84.9% with a convolutional/recurrent neural network combination when trained on text alone.", "While these results are not strictly comparable as they are based on different subsets of the data, use different cross-validation strategies and report different performance metrics, they collectively show that supervised models can learn to identify patients with AD using data from elicited speech samples.", "However, as is generally the case with supervised learning on small data sets, overfitting is a concern.", "Perplexity is used as an estimate of the fit between a probabilistic language model and a segment of previously unseen text.", "The notion of applying n-gram model perplexity (a derivative of cross-entropy) as a surrogate measure of syntactic complexity in spoken narratives was proposed by Roark et al. (2007) and applied to transcribed logical memory (story recall) test responses by patients with mild cognitive impairment (MCI: a frequent precursor to AD diagnosis).", "In this work, sequences of part-of-speech (POS) tags were used to train bi-gram models on logical memory narratives, and then cross-entropy of these models was computed on held-out cross-validation folds.", "They found significantly higher mean cross-entropy values in narratives of MCI patients as compared to controls.", "Subsequent work expanded the use of POS cross-entropy as one of the language characteristics in a predictive model for detecting MCI (Roark et al., 2011).", "Perplexity can also be calculated on word tokens and serve as an indicator of an n-gram model's efficiency in predicting new utterances (Jelinek et al., 1977).", "Pakhomov et al. (2010b) included word and POS LM perplexity amongst a set of measurements used to distinguish between speech samples elicited from healthy controls and patients with frontotemporal lobar degeneration (FTLD).", "An LM was trained on text from an external corpus of transcribed Cookie Theft picture
descriptions performed by subjects without dementia from a different study.", "This model was then used to estimate perplexity of elicited speech samples in cases and controls, with significant differences between mean perplexity scores obtained from subjects with the semantic dementia variant of FTLD and controls.", "However, the authors did not attempt to use perplexity score as a variable in a diagnostic classification of FTLD or its subtypes.", "Collectively, these studies suggest elevated perplexity (both at the word and POS level) may indicate the presence of dementia.", "A follow-up study (Pakhomov et al., 2010a) used perplexity calculated with a model trained on a corpus of conversational speech unrelated to the picture description task, as part of a factor analysis of speech and language characteristics in FTLD.", "Results suggested that the general English LM word- and POS-level perplexity did not discriminate between FTLD subtypes, or between cases and controls.", "Taken together with the prior results, these results suggest that LMs trained on transcripts elicited using a defined task (such as the Cookie Theft task) are better equipped to distinguish between cases and controls than LMs trained on a broader corpus.", "As the vocabulary of AD patients becomes progressively constrained, one might anticipate language use becoming more predictable with disease progression.", "Wankerl et al. (2016) evaluate this hypothesis using the writings of Iris Murdoch, who developed AD later in life and eschewed editorial revisions.", "In this analysis, which was based on time-delimited train/test splits, perplexity decreased in her later output.", "This is consistent with recent work by Weiner et al.
(2018) that found diminished perplexity was of some (albeit modest) utility in predicting transitions to AD.", "The idea of combining two perplexity estimates, one from a model trained on transcripts of speech produced by healthy controls and the other from a model trained on transcripts from patients with dementia, was developed by Wankerl et al. (2017), who report an AUC of 0.83 using n-gram LMs in a participant-level leave-one-out cross-validation (LOOCV) evaluation across the DementiaBank dataset.", "Fritsch et al. (2019) further improved performance of this approach by substituting a neural LM (an LSTM model) for the n-gram LM, and report an improved AUC of 0.92.", "However, it is currently unclear as to whether this level of accuracy is due to dementia-specific linguistic markers, or a result of markers of other significant differences between the case and control group, such as age (mean 71.4 vs. 63) and years of education (mean 12.1 vs. 14.3) (Becker et al., 1994).", "Recurrent neural network language models (RNN-LM) (Mikolov et al., 2010) are widely used in machine translation and other applications such as sequence labeling (Goldberg, 2016).", "Recurrent Neural Networks (RNN) (Jordan, 1986; Elman, 1990) facilitate modeling sequences of indeterminate length by maintaining a state vector, S_{t-1}, that is combined with a vector representing the input for the next data point in a sequence, x_t, at each step of processing.", "Consequently, RNN-LMs have recourse to information in all words preceding the target for prediction, in contrast to n-gram models.", "They are also robust to previously unseen word sequences, which with naive n-gram implementations (i.e., without smoothing or backoff) could result in an entire sequence being assigned a probability of zero.", "Straightforward RNN implementations are vulnerable to the so-called vanishing and exploding gradient problems (Hochreiter, 1998; Pascanu et al., 2012), which emerge on account of the numerous
sequential multiplication steps that occur with backpropagation through time (time here indicating each step through the sequence to be modeled), and limit the capacity of RNNs to capture long-range dependencies.", "An effective way to address this problem involves leveraging Long Short-Term Memory (LSTM) networks (Hochreiter and Schmidhuber, 1997), which use structures known as gates to inhibit the flow of information during training, and a mechanism using a memory cell to preserve selected information across sequential training steps.", "Groups of gates comprise vectors with components that have values that are forced to be close to either 1 or 0 (typically accomplished using the sigmoid function).", "Only values close to 1 permit transmission of information, which disrupts the sequence of multiplication steps that occurs when backpropagating through time.", "The three gates used with typical LSTMs are referred to as Input, Forget and Output gates, and as their names suggest they govern the flow of information from the input and past memory to the current memory state, and from the output of each LSTM unit (or cell) to the next training step.", "LSTM LMs have been shown to produce better perplexity estimates than n-gram models (Sundermeyer et al., 2012).", "A known distinguishing feature of the speech of AD patients is that it tends to contain higher-frequency words with less specificity than that of cognitively healthy individuals (e.g., overuse of pronouns and words like 'thing') (Almor et al., 1999b).", "Lexical frequency affects speech production; however, these effects have different origins in healthy and cognitively impaired individuals.", "A leading cognitive theory of speech production postulates a two-step process of lexical access in which concepts are first mapped to lemmas and, subsequently, to phonological representations prior to articulation (Levelt, 2001).", "In individuals without dementia, lexical frequency effects are evident only at the second
step, the translation of lemmas to phonological representations, and do not originate at the pre-lexical conceptual level (Jescheniak and Levelt, 1994).", "In contrast, in individuals with dementia, worsening word-finding difficulties are attributed to progressive degradation of semantic networks that underlie lexical access at the conceptual level (Astell and Harley, 1996).", "While lexical frequency effects are difficult to control in unconstrained, purely spontaneous language production, language produced during the picture description task is much more constrained in that the picture provides a fixed set of objects, attributes, and relations that serve as referents for the person describing the picture.", "Thus, in the context of the current study, we expect to find that both healthy individuals and patients with dementia describing the same picture would attempt to refer to the same set of concepts, but that patients with dementia would tend to use more frequent and less specific words due to erosion of semantic representations leading to insufficient activation of the lemmas.", "Changes in vocabulary have been reported in the literature as one of the most prominent linguistic manifestations of AD (Pekkala et al., 2013; Wilson et al., 1983; Rohrer et al., 2007).", "We do not suggest that other aspects of language, such as syntactic complexity, should be excluded; although there has been some debate as to the utility of syntactic complexity specifically as a distinguishing feature (see Fraser et al., 2015).", "For LM training and evaluation we used transcripts of English-language responses to the Cookie Theft component of the Boston Diagnostic Aphasia Exam (Goodglass, 2000), provided as part of the DementiaBank database (Becker et al., 1994).", "Transcripts (often multiple) are available for 169 subjects classified as having possible or probable DAT on the basis of clinical or pathological examination, and 99 patients classified as
controls.", "For interrogation by perturbation, we used a set of six synthetic Cookie Theft picture description narratives created by Bird et al. (2000) to study the impact of semantic dementia on verb and noun use in picture description tasks.", "While Bird et al. (2000) focused on semantic dementia, a distinct condition from DAT, these synthetic narratives were not based on patients with semantic dementia.", "Rather, they were created to manipulate lexical frequency by first compiling a composite baseline narrative from samples by healthy subjects, and then removing and/or replacing nouns and verbs in that baseline with words of higher lexical frequency (e.g., mother vs. woman vs. she).", "Lexical frequency was calculated using the Celex Lexical Database (LDC96L14) and words were aggregated into groups based on four log frequency bands (0.5-1.0, 1.0-1.5, 1.5-2.0, 2.5-3.0; e.g., words in the 0.5-1.0 band occur in Celex more than 10 times per million).", "These narratives are well-suited to the study of lexical retrieval deficits in DAT, in which loss of access to less frequent words is observed with disease progression (Pekkala et al., 2013).", "In order to calculate mean log lexical frequency on the DementiaBank narratives, we used the SUBTLEX-US corpus, shown to produce lexical frequencies more consistent with psycholinguistic measures of word processing time than those calculated from the Celex corpus (Brysbaert and New, 2009).", "The DementiaBank narratives were processed using NLTK's (www.nltk.org) implementation of the TnT part-of-speech tagger (Brants, 2000) trained on the Brown corpus (Francis and Kucera, 1979).", "Following Bird et al.
(2000), only nouns and verbs unique within the narrative were used to calculate mean log lexical frequency.", "We did not stem the words in order to avoid creating potentially artificially high/low frequency items.", "To validate the mean log lexical frequency values obtained with the SUBTLEX-US corpus, we compared the log lexical frequency means for the six narratives developed by Bird et al. (2000) with their frequency band values using Spearman's rank correlation and found them to be perfectly correlated (ρ = 1.0).", "The text of DementiaBank transcripts was extracted from the original CHAT files (Macwhinney, 2000).", "The transcripts as well as the six synthetic narratives were lowercased and pre-processed by removing speech and non-speech noise as well as pause fillers (um's and ah's) and punctuation (excepting the apostrophe).", "Prior work with neural LMs in this context has used randomly instantiated models.", "We wished to evaluate the utility of pre-training for this task: both pre-training of the LSTM in its entirety and pre-training of word embeddings alone.", "For the former we used an LSTM trained on the WikiText-2 dataset (Merity et al., 2016) provided with the GluonNLP package (https://github.com/dmlc/gluon-nlp).", "200-dimensional word embeddings, including embeddings augmented with subword information (Bojanowski et al., 2017), were developed using the Semantic Vectors package (https://github.com/semanticvectors/semanticvectors) and trained using the skipgram-with-negative-sampling algorithm of Mikolov et al.
(2013) for a single iteration on the English Wikipedia (10/1/2019 edition, pre-processed with wikifl.pl, available at https://github.com/facebookresearch/fastText) with a window radius of five (other hyperparameters per Cohen and Widdows, 2018).", "We report results using skipgram embeddings augmented with subword information as these improved performance over both stochastically-initialized and WikiText-2-pretrained LSTMs in preliminary experiments.", "3.3 Training: We trained two sets of dementia and control LSTM models.", "The first set was trained in order to replicate the findings of Fritsch et al. (2019), using the same RWTHLM package (Sundermeyer et al., 2014) and following their methods as closely as possible in accordance with the description provided in their paper.", "Each model's cross-entropy loss was optimized over 20 epochs, with starting learning rate optimization performed on a heldout set of 10 transcripts.", "The second set was trained using the GluonNLP averaged stochastic gradient weight-dropped LSTM (standard-lstm-lm-200 architecture) model, consisting of 2 LSTM layers with word embedding (tied at input and output) and hidden layers of 200 and 800 dimensions respectively (see Merity et al. (2017) for full details on model architecture).", "In training the GluonNLP models, the main departure from the methods used by Fritsch et al.
(2019) involved not using a small heldout set of transcripts to optimize the learning rate, because we observed that the GluonNLP models converged well prior to the 20th epoch with a starting learning rate of 20, which was used for all stochastically initialized models.", "With pre-trained models we used a lower starting learning rate of 5 to preserve information during subsequent training on DementiaBank.", "All GluonNLP models were trained using a batch size of 20 and a back propagation through time (BPTT) window size of 10.", "During testing, batch size was set to 1 and BPTT to the length of the transcript (in tokens).", "Unseen transcript perplexity was calculated as e^loss, the exponential of the mean per-token cross-entropy loss.", "As subjects in the DementiaBank dataset participated in multiple assessments, there are multiple transcripts for most of the subjects.", "In order to avoid biasing the models to individual subjects, we followed the participant-level leave-one-out cross-validation (LOOCV) evaluation protocol of Fritsch et al. (2019), whereby all of the picture description transcripts for one participant are held out in turn for testing and the LMs are trained on the remaining transcripts.", "Perplexities of the LMs are then obtained on the heldout transcripts, resulting in two perplexity values per transcript, one from the LM trained on the dementia transcripts (P_dem) and one from the LM trained on the control transcripts (P_con).", "Held-out transcripts were scored using these perplexity values, as well as by the difference (P_con - P_dem) between them.", "For interrogation by perturbation, we estimated the perplexity of our models for each of the six synthetic narratives of Bird et al.
(2000).", "We reasoned that an increase in P_con and a decrease in P_dem as words are replaced by higher-frequency alternatives to simulate progressive lexical retrieval deficits would indicate that these models were indeed capturing AD-related linguistic changes.", "For interrogation by interpolation, we extracted the parameters from all layers of paired LSTM LMs after training, and averaged these as λ·LM_dem + (1 - λ)·LM_con to create interpolated models.", "We hypothesized that a decrease in perplexity estimates for narratives emulating severe dementia would occur as λ (the proportional contribution of LM_dem) increases.", "The results of evaluating classification accuracy of the various language models are summarized in Table 1.", "The 95% confidence interval for GluonNLP models was calculated from perplexity means obtained across ten LOOCV iterations with random model weight initialization on each iteration.", "The RWTHLM package does not provide support for GPU acceleration and requires a long time to perform a single LOOCV iteration (approximately 10 days in our case).", "Since the purpose of using the RWTHLM package was to replicate the results previously reported by Fritsch et al.
(2019) that were based on a single LOOCV iteration, and we obtained the exact same AUC of 0.92 on our first LOOCV iteration with this approach, we did not pursue additional LOOCV iterations.", "However, we should note that we obtained an AUC of 0.92 for the difference between P_con and P_dem on two of the ten LOOCV iterations with the GluonNLP LSTM model.", "Thus, we believe that the GluonNLP LSTM model has equivalent performance to the RWTHLM LSTM model.", "Table 1: Classification accuracy (AUC; 95% CI in parentheses) using individual models' perplexities and their difference. CONTROL: RWTHLM LSTM 0.80, GluonNLP LSTM 0.80 (±0.002); DEMENTIA: RWTHLM LSTM 0.64, GluonNLP LSTM 0.65 (±0.002); CONTROL-DEMENTIA: RWTHLM LSTM 0.92, GluonNLP LSTM 0.91 (±0.004).", "Having replicated results of previously published studies and confirmed that using the difference in perplexities trained on narratives by controls and dementia patients is indeed the current state-of-the-art, we now turn to explaining why the difference between these LMs is much more successful than the individual models alone.", "First, we used the six Cookie Theft narratives designed to simulate semantic dementia to examine the relationship between P_con and P_dem with GluonNLP LSTM LMs and log lexical frequency bands.", "The results of this analysis are illustrated in Figure 1 and show that P_dem is higher than P_con on narratives in the lower log frequency bands (less simulated impairment) and lower in the higher log frequency bands (more simulated impairment).", "We confirmed these results by calculating mean log lexical frequency on all DementiaBank narratives and fitting a linear regression model to test for associations with perplexities of the two LMs.", "The regression model contained mean lexical frequency as the dependent variable and P_dem and P_con as independent variables, adjusted for age, education and the length of the picture description narrative.", "In order to avoid likely practice effects across multiple transcripts, we only
used the transcript obtained on the initial baseline visit; however, we did confirm these results by using all transcripts to fit mixed effects models with random slopes and intercepts in order to account for the correlation between transcripts from the same subject (mixed effects modeling results not shown).", "The results demonstrate that the association between perplexity and lexical frequency is significant and positive for the control LM (coeff: 0.563, p < 0.001) and negative for the dementia LM (coeff: -0.543, p < 0.001).", "Age, years of education, and length of the narrative were not significantly associated with lexical frequency in this model.", "These associations show that the control LM and dementia LM are more surprised by narratives containing words of higher lexical frequency and lower lexical frequency, respectively.", "If the use of higher lexical frequency items on a picture description task portends a semantic deficit, then this particular pattern of results explains why it is the difference between the two models that is most sensitive to manifestations of dementia, and suggests that there is a point at which the two models become equally surprised, with a difference between their perplexities close to zero.", "In Figure 1, that point is between log lexical frequency bands of 2.0 and 2.5, corresponding to the mild to moderate degree of semantic impairment reported by Bird et al. (2000).", "Notably, in the clinical setting, the mild forms of dementia such as mild cognitive impairment and mild dementia are also particularly challenging and require integration of multiple sources of evidence for accurate diagnosis (Knopman and Petersen, 2014).", "The results of our interpolation studies are shown in Figure 2.", "
Each point in the figure shows the average difference between the perplexity estimate of a perturbed transcript (P_x) and the perplexity estimate for the unperturbed (P_o: frequency band 0) sample for this model.", "We visualized this difference because perplexities at λ=0.5 were generally higher, irrespective of whether component models were initialized stochastically or had pre-trained word embeddings in common.", "Perplexities of λ=0.75 models were slightly lower than those of their majority constituents.", "Table 2: Performance of randomly-instantiated and pre-trained (subword-based skipgram embeddings) interpolated two-perplexity models across 10 repeated per-participant LOOCV runs.", "λ indicates the proportional contribution of the dementia model.", "ACC_eer gives the accuracy at equal error rate.", "Best results are in boldface, and results using the approach of Fritsch et al. (2019) are in italics.", "While all models tend to find the increasingly perturbed transcripts more perplexing than their minimally perturbed counterparts, this perplexity decreases with increasing contributions of the dementia LM.", "However, when only this model is used, relative perplexity of the perturbed transcripts increases.", "This indicates that the pure dementia LM may be responding to linguistic anomalies other than those reflecting lack of access to infrequently occurring terms.", "We reasoned that on account of this, the λ=0.75 model may provide a better representation of dementia-related linguistic changes.", "To evaluate this hypothesis, we assessed the effects on performance of replacing the dementia model with this interpolated model.", "The results of these experiments (Table 2) reveal improvements in performance with this approach, with best AUC (0.941) and accuracy at equal error rate (0.872) resulting from the combination of interpolation with pre-trained word embeddings.", "That pre-trained embeddings further improve performance is consistent with the
observation that the elevation in perplexity when transitioning from λ=0.75 to λ=1.0 is much less pronounced in these models (Figure 3).", "These results are significantly better than those reported by Fritsch et al. (2019), and our reimplementation of their approach.", "These improvements in performance appear to be attributable to a smoothing effect on the perplexity of the modified dementia models in response to unseen dementia cases.", "Over ten repeated LOOCV iterations, average perplexity on held-out dementia cases was significantly lower than that of the baseline 'dementia' model (51.1 ±0.81) for both the λ=0.75 (47.3 ±0.32) and pre-trained embeddings (44.8 ±0.53) models.", "Simply weighting the difference in model perplexities does not perform as well as interpolating model weights, with at best a 0.001 improvement in AUC over the baseline.", "This trend is further accentuated with the severity of dementia: for transcripts corresponding to a mini-mental state exam (MMSE) score of 10 or below (n=16), average perplexities are 148.29 ±7.69, 105.01 ±3.48 and 121.86 ±7.67 for the baseline 'dementia', λ=0.75 and pre-trained embeddings models, respectively.", "In both cases, average perplexity of the interpolated (λ=0.75) pre-trained embeddings model fell between those of the exclusively pre-trained (lowest overall) and exclusively interpolated (lowest in severe cases) models.", "A practical issue for automated methods to detect dementia concerns establishing their accuracy at earlier stages of disease progression, where a readily disseminable screening tool would arguably have greatest clinical utility, especially in the presence of an effective disease-modifying therapy.", "To this end, Fritsch et al.
(2019) defined a screening scenario in which evaluation was limited to participants with a last available MMSE of 21 or more, which corresponds to a range of severity encompassing mild, questionable or absent dementia (Perneczky et al., 2006).", "In this scenario, classification accuracy of the 'paired perplexity' LSTM-based model was only slightly lower (AUC: 0.87) than the accuracy on the full range of cognitive impairment (AUC: 0.92).", "We found similar performance with our models.", "When limiting evaluation to those participants with a last-recorded MMSE of 21 or more, average AUCs across 10 LOOCV iterations were 0.836 ±0.014, 0.879 ±0.01, 0.893 ±0.004, and 0.899 ±0.012 for the baseline (Fritsch et al., 2019), pre-trained embeddings, interpolated (λ=0.75) and interpolated (λ=0.75) with pre-trained embeddings variants, respectively.", "These results support the notion that paired neural LMs can be used effectively to screen for possible dementia at earlier stages of cognitive impairment.", "The contributions of our work can be summarized as follows.", "First, our results demonstrate that the relationship between LM perplexity and lexical frequency is consistent with the phenomenology of DAT and its deleterious effects on patients' vocabulary.", "We show that the two-perplexities approach is successful at distinguishing between cases and controls in the DementiaBank corpus because of its ability to capture specifically linguistic manifestations of the disease.", "Second, we observe that interpolating between dementia and control LMs mitigates the tendency of dementia-based LMs to be surprised by transcripts indicating severe dementia, which is detrimental to performance when the difference between these LMs is used as a basis for classification.", "In addition, we find a similar smoothing effect when using pre-trained word embeddings in place of a randomly instantiated word embedding layer.", "Finally, we develop a modification of Fritsch et al.'s two-perplexity approach
that is consistent with these observations: replacing the dementia model with an interpolated variant, and introducing pre-trained word embeddings at the embedding layer.", "Both modifications exhibit significant improvements in performance, with best results obtained by using them in tandem.", "Though not strictly comparable on account of differences in segmentation of the corpus, amongst other factors, we note the performance obtained also exceeds that reported with models trained on text alone in prior research.", "Code to reproduce the results of our experiments is available on GitHub.", "While using transcript text directly is appealing in its simplicity, others have reported substantial improvements in performance when POS tags and paralinguistic features are incorporated, suggesting fruitful directions for future research.", "Furthermore, prior work on using acoustic features shows that they can contribute to discriminative models (Konig et al., 2015); however, DementiaBank audio is challenging for acoustic analysis due to poor quality and background noise.", "Lastly, while our results do support the claim that classification occurs on the basis of dementia-specific linguistic anomalies, we also acknowledge that DementiaBank remains a relatively small corpus by machine learning standards, and that more robust validation would require additional datasets.", "We offer an empirical explanation for the success of the difference between neural LM perplexities in discriminating between DAT patients and controls, involving lexical frequency effects.", "Interrogation of control- and dementia-based LMs using synthetic transcripts and interpolation of parameters reveals inconsistencies harmful to model performance that can be remediated by incorporating interpolated models and pre-trained embeddings, with significant performance improvements.", "11 https://github.com/treversec/tale_of_two_perplexities References Daniel C Aguirre-Acevedo, Francisco Lopera, Eliana Henao,
Victoria Tirado, Claudia Munoz, Margarita Giraldo, Shrikant I Bangdiwala, Eric M Reiman, Pierre N Tariot, Jessica B Langbaum, et al. 2016." ]
[ "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "other", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "other" ]
[ "Cross-lingual summarization is the task of generating a summary in one language given a text in a different language.", "Previous work on cross-lingual summarization has mainly focused on pipeline methods or on training an end-to-end model using translated parallel data.", "However, it is a big challenge for a model to learn cross-lingual summarization directly, as it requires learning to understand different languages and learning how to summarize at the same time.", "In this paper, we propose to ease cross-lingual summarization training by jointly learning to align and summarize.", "We design relevant loss functions to train this framework and propose several methods to enhance the isomorphism and cross-lingual transfer between languages.", "Experimental results show that our model can outperform competitive models in most cases.", "In addition, we show that our model even has the ability to generate cross-lingual summaries without access to any cross-lingual corpus.", "Neural abstractive summarization has witnessed rapid growth in recent years.", "Variants of sequence-to-sequence models have been shown to obtain promising results on English (See et al., 2017) or Chinese summarization datasets.", "However, cross-lingual summarization, which aims at generating a summary in one language from input text in a different language, has rarely been studied because of the lack of parallel corpora.", "Early research on cross-lingual abstractive summarization was mainly based on the summarization-translation or translation-summarization pipeline paradigm and adopted different strategies to incorporate bilingual features (Leuski et al., 2003; Orasan and Chiorean, 2008; Wan et al., 2010; Wan, 2011) into the pipeline model.", "Recently, Shen et al. 
(2018) first propose a neural cross-lingual summarization system based on a large-scale corpus.", "They first translate the texts automatically from the source language into the target language and then use the teacher-student framework to train a cross-lingual summarization model.", "Duan et al. (2019) further improve this teacher-student framework by using genuine summaries paired with the translated pseudo source sentences to train the cross-lingual summarization model.", "Zhu et al. (2019) propose a multi-task learning framework to train a neural cross-lingual summarization model.", "Cross-lingual summarization is a challenging task, as it requires learning to understand different languages and learning how to summarize at the same time.", "It would be difficult for a model to learn cross-lingual summarization directly.", "In this paper, we explore this question: can we ease the training and enhance cross-lingual summarization by establishing alignment of context representations between two languages?", "Learning cross-lingual representations has proven to be a beneficial method for cross-lingual transfer in some downstream tasks (Klementiev et al., 2012; Artetxe et al., 2018; Ahmad et al., 2019; Chen et al., 2019).", "The underlying idea is to learn a shared embedding space for two languages to improve the model's ability for cross-lingual transfer.", "Recently, it has been shown that this method can also be applied to context representations (Aldarmaki and Diab, 2019; Schuster et al., 2019).", "In this paper, we show that learning cross-lingual representations is also beneficial for neural cross-lingual summarization models.", "We propose a multi-task framework that jointly learns to summarize and align context-level representations.", "Concretely, we first integrate monolingual summarization models and cross-lingual summarization models into one unified model and then build two linear mappings to project the context representation from one 
language to the other.", "We then design several relevant loss functions to learn the mappers and facilitate cross-lingual summarization.", "In addition, we propose some methods to enhance the isomorphism and cross-lingual transfer between different languages.", "We also show that learning aligned representations enables our model to generate cross-lingual summaries even in a fully unsupervised way where no parallel cross-lingual data is required.", "We conduct experiments on several public cross-lingual summarization datasets.", "Experimental results show that our proposed model outperforms competitive models in most cases, and our model also works in the unsupervised setting.", "To the best of our knowledge, we are the first to propose an unsupervised framework for learning neural cross-lingual summarization.", "In summary, our primary contributions are as follows: We propose a framework that jointly learns to align and summarize for neural cross-lingual summarization and design relevant loss functions to train our system.", "We propose a procedure to train our cross-lingual summarization model in an unsupervised way.", "The experimental results show that our model outperforms competitive models in most cases, and our model has the ability to generate cross-lingual summaries even without any cross-lingual corpus.", "We show the overall framework of our proposed model in Figure 1.", "Our model consists of two encoders, two decoders, two linear mappers, and two discriminators.", "Suppose we have an English source text x = { x 1 , . . . , x m } and a Chinese source text y = { y 1 , . . . , y n } , which consist of m and n words, respectively.", "The English encoder EX (resp. Chinese encoder EY ) transforms x (resp. y ) into its context representation z x (resp. z y ), and the decoder DX (resp. DY ) reads the memory z x (resp. z y ) and generates the corresponding English summary x (resp. 
Chinese summary y ).", "The mappers MX : Z_x → Z_y and MY : Z_y → Z_x are used for transformations between z_x and z_y (Figure 1: The overall framework of our proposed model).", "The discriminators DX and DY are used for discriminating between the encoded representations and the mapped representations.", "Taking English-to-Chinese summarization as an example, our model generates cross-lingual summaries as follows: first, we use the English encoder to get the English context representations; then we use the mapper to map the English representations into the Chinese space.", "Lastly, the Chinese decoder is used to generate Chinese summaries.", "In Section 3, we describe the techniques we adopt to enhance the cross-lingual transferability of the model.", "In Section 4 and Section 5, we describe the unsupervised training objective and the supervised training objective for cross-lingual summarization, respectively.", "In our model, we adopt the Transformer (Vaswani et al., 2017) as our encoder and decoder, the same as in previous works (Duan et al., 2019; Zhu et al., 2019).", "The encoder and decoder are connected via cross-attention.", "The cross-attention is implemented as the following dot-product attention module: Attention ( S, T ) = softmax ( T S^T / √ d_k ) S (1) where S is the packed encoder-side contextual representation, T is the packed decoder-side contextual representation, and d_k is the model size.", "In the dot-product module, it would be beneficial if the contextual representations of the encoder and decoder had the same distribution.", "However, in the cross-lingual setting, the encoder and decoder deal with different languages, and thus the distributions of the learned contextual representations may be inconsistent.", "This motivates us to explicitly learn alignment relationships between languages.", "To make the contextual representations of the two languages easier to align, we introduce a normalization technique into the Transformer model.", 
"Normalizing word representations has been proven an effective technique for word alignment (Xing et al., 2015).", "After normalization, the two sets of embeddings both lie on a unit hypersphere, which makes them easier to align.", "We achieve this by introducing the pre-normalization technique and replacing LayerNorm with ScaleNorm (Nguyen and Salazar, 2019): o_{ℓ+1} = LayerNorm ( o_ℓ + F_ℓ ( o_ℓ )) becomes o_{ℓ+1} = o_ℓ + F_ℓ ( ScaleNorm ( o_ℓ )), where F_ℓ is the ℓ-th layer and o_ℓ is its input.", "The formula for calculating ScaleNorm is: ScaleNorm ( x ; g ) = g · x / ‖x‖ (2) where g is a hyper-parameter.", "An additional benefit of ScaleNorm is that, after normalization, the dot product of two vectors u^T v is equivalent to their cosine similarity u^T v / ( ‖u‖ ‖v‖ ), which may benefit the attention module in the Transformer.", "We will conduct experiments to verify this.", "A key assumption in aligning the representations of two languages is the isomorphism of the learned monolingual representations.", "Some researchers show that the isomorphism assumption weakens when two languages are etymologically distant (Søgaard et al., 2018; Patra et al., 2019).", "However, Ormazabal et al. (2019) show that this limitation is due to the independent training of two separate monolingual embeddings, and they suggest jointly learning cross-lingual representations on monolingual corpora.", "Inspired by Ormazabal et al. 
(2019), we take the following approaches to address the isomorphism problem.", "First, we combine the English and Chinese summarization corpora and build a unified vocabulary.", "Second, we share encoders and decoders in our model.", "Sharing encoders and decoders can also encourage the model to learn shared contextual representations across languages.", "For the shared decoder, to indicate the target language, we set the first token of the decoder to specify the language the module is operating with.", "Third, we train several monolingual summarization steps before cross-lingual training, as shown in the first line of Alg.", "1.", "The pre-trained monolingual summarization steps also allow the model to learn the easier monolingual summarization task first and then further learn cross-lingual summarization, which may reduce the training difficulty.", "We describe the objective of unsupervised cross-lingual summarization in this section.", "The whole training procedure can be found in Alg.", "1.", "Summarization Loss Given an English text-summary pair x and x′ , we use the encoder EX and the decoder DX to generate the hypothetical English summary x̂ that maximizes the output summary probability given the source text: x̂ = arg max_x̂ P ( x̂ | x ) .", "We adopt maximum log-likelihood training with the cross-entropy loss between the hypothetical summary x̂ and the gold summary x′ : z_x = EX ( x ) , x̂ = DX ( z_x ) , L^summ_X ( x , x′ ) = − Σ_{t=1}^{T} log P ( x′_t | x̂_{<t} , z_x ) (3) where T is the length of x′ .", "The Chinese summarization loss L^summ_Y is similarly defined for the Chinese encoder EY and decoder DY .", "Generative and Discriminative Loss Given an English source text x and a Chinese source text y , we use the encoders EX and EY to obtain the contextual representations z_x = { z_{x,1} , . . . , z_{x,m} } and z_y = { z_{y,1} , . . . 
, z_{y,n} } , respectively.", "For Zh-to-En summarization, we use the mapper MY to map z_y into the English context space: z_{y→x} = MY ( z_y ) .", "We hope that the mapped distribution z_{y→x} and the real English distribution z_x are as similar as possible, such that the English decoder can handle cross-lingual summarization just as it handles monolingual summarization.", "To learn this mapping, we introduce two discriminators and adopt the adversarial training technique (Goodfellow et al., 2014).", "We optimize the mappers at the sentence level 1 rather than the word level, inspired by Aldarmaki and Diab (2019), who found that learning the aggregate mapping can yield a better solution than word-level mapping.", "Concretely, we first average the contextual representations: z̄_{y→x} = (1/n) Σ_{i=1}^{n} ( z_{y→x} )_i , z̄_x = (1/m) Σ_{i=1}^{m} z_{x,i} (4) Then we train the discriminator DX to discriminate between z̄_{y→x} and z̄_x using the following discriminative loss: L^dis_X ( z̄_{y→x} , z̄_x ) = − log P_{DX} (src = 0 | z̄_{y→x} ) − log P_{DX} (src = 1 | z̄_x ) (5) where P_{DX} (src | z ) is the predicted probability with which DX distinguishes whether z comes from the real English representation (src = 1) or from the mapper MY (src = 0) .", "In our framework, the encoder EX and the mapper MY together make up the generator.", "The generator tries to generate representations that would confuse the discriminator, so its objective is to maximize the discriminative loss in Eq.", "5.", "Alternatively, we train the generator to minimize the following generative loss: L^gen_Y ( z̄_{y→x} , z̄_x ) = − log P_{DX} (src = 1 | z̄_{y→x} ) − log P_{DX} (src = 0 | z̄_x ) (6) The discriminative loss L^dis_Y ( z̄_{x→y} , z̄_y ) for DY and the generative loss L^gen_X ( z̄_{x→y} , z̄_y ) for EY and MX are similarly defined.", "Notice that since we use vector averaging and adopt a linear transformation, it does not matter whether we apply the linear mapping before or after averaging the contextual representations, and the 
learned sentence-level mappers can be directly applied to word-level mappings.", "Cycle Reconstruction Loss Theoretically, if we do not add additional constraints, there exist infinitely many mappings that can align the distributions of z_x and z_y , and thus the learned mappers may be invalid.", "In order to learn better mappings, we introduce the cycle reconstruction loss and the back-translation loss to enhance them.", "Given z_x , we first use MX to map it to the Chinese space, and then use MY to map it back: z_{x→y} = MX ( z_x ) , ẑ_x = MY ( z_{x→y} ) (7) We force z_x and ẑ_x to be consistent, constrained by the following cycle reconstruction loss: L^cyc_X ( z_x , ẑ_x ) = ‖ z_x − ẑ_x ‖ (8) The cycle reconstruction loss L^cyc_Y for z_y and ẑ_y is similarly defined.", "Back-Translation Loss The cycle-reconstructed representation ẑ_x in Eq.", "8 can be regarded as augmented data to train the decoder, similar to back-translation in the neural machine translation area.", "Concretely, we use the decoder DX to read ẑ_x and generate the hypothetical summary x̂ .", "The back-translation loss is defined as the cross-entropy loss between x̂ and the gold summary x′ : x̂ = DX ( ẑ_x ) , L^back_X ( ẑ_x ) = − Σ_{t=1}^{T} log P ( x′_t | x̂_{<t} , ẑ_x ) (9) The back-translation loss enhances not only the generation ability of the decoder but also the effectiveness of the mapper.", "The back-translation loss L^back_Y for ẑ_y is similarly defined.", "Total Loss The total loss for optimizing the encoder, decoder, and mapper of the English side is a weighted sum of the above losses: L_X = L^summ_X + λ1 L^gen_X + λ2 L^cyc_X + λ3 L^back_X (10) where λ1 , λ2 , and λ3 are weighting hyper-parameters.", "The total loss of the Chinese side is similarly defined, and the complete loss of our model is the sum of the English loss and the Chinese loss: L = L_X + L_Y (11) The total loss for optimizing the discriminators is: L^dis = L^dis_X + L^dis_Y (12) 5 Supervised Training 
Objective The supervised training objective contains the same summarization loss as the unsupervised training objective (Eq. 3).", "In addition, it has an X-summarization loss and a reconstruction loss.", "Algorithm 1 Cross-lingual summarization Input: English summarization data X and Chinese summarization data Y .", "1: Pre-train English and Chinese monolingual summarization for several epochs on X and Y .", "2: for i = 0 to max_iters do 3: Sample a batch from X and a batch from Y 4: if unsupervised then 5: for k = 0 to dis_iters do 6: Update DX and DY on L^dis in Eq. 5.", "7:", "(a) Update EX , EY , DX , and DY 8: on L^summ in Eq.", "3.", "9:", "(b) Update EX , EY , MX , and MY 10: on L^gen in Eq.", "6.", "11:", "(c) Update EX , EY , MX , and MY 12: on L^cyc in Eq.", "8. 13:", "(d) Update MX , MY , DX , and DY 14: on L^back in Eq.", "9. 15: else if supervised then 16:", "(a) Update EX , EY , DX , and DY 17: on L^summ in Eq.", "3.", "18:", "(b) Update EX , EY , DX , and DY 19: on L^xsumm in Eq.", "13. 20:", "(c) Update EX , EY , MX , and MY 21: on L^rec in Eq.", "14. 
X-Summarization Loss Suppose we are given a parallel English source text x and a Chinese summary y′ .", "We use EX , MX , and DY to generate the hypothetical Chinese summary ŷ , then train them with the cross-entropy loss: z_x = EX ( x ) , z_{x→y} = MX ( z_x ) , ŷ = DY ( z_{x→y} ) , L^xsumm_X ( x , y′ ) = − Σ_{t=1}^{T} log P ( y′_t | ŷ_{<t} , x ) (13) The X-summarization loss for a Chinese text y and an English summary x′ is similarly defined.", "Reconstruction Loss Since the cross-lingual summarization corpora are constructed by translating the texts into the other language, the English texts and the Chinese texts are parallel to each other.", "We can build a reconstruction loss to align the sentence representations of the parallel English and Chinese texts.", "Specifically, supposing that x and y are parallel English and Chinese source texts, we first use EX and EY to obtain the contextual representations z_x and z_y , respectively.", "Then we average the contextual representations to get their sentence representations and use the mappers to map them into the other language.", "Since the English and Chinese texts are translations of each other, the semantics of their sentence representations should be the same.", "Thus we design the following reconstruction loss: z̄_x = (1/m) Σ_{i=1}^{m} z_{x,i} , z̄_{y→x} = (1/n) Σ_{i=1}^{n} ( z_{y→x} )_i , L^rec_X ( z̄_x , z̄_{y→x} ) = ‖ z̄_x − z̄_{y→x} ‖ (14) and L^rec_Y is similarly defined.", "Notice that the generative and discriminative losses, the cycle-reconstruction loss, and the back-translation loss are unnecessary here, because we can directly use the aligned source texts with objective (14) to align the context representations.", "The total loss of the English side is a weighted sum of the above losses, L_X = L^summ_X + λ1 L^xsumm_X + λ2 L^rec_X , where λ1 and λ2 are weighting hyper-parameters.", "The total loss of the Chinese side is similarly defined.", "We conduct experiments on English-to-Chinese (En-to-Zh) and Chinese-to-English (Zh-to-En) summarization.", "Following Duan et al. 
(2019), we translate the source texts to the other language to form the (pseudo) parallel corpus.", "Since they do not release their training data, we translate the source text ourselves through the Google translation service.", "Notice that Zhu et al. (2019) translate the summaries rather than source texts.", "Since Duan et al. (2019) use Gigaword and DUC2004 datasets for experiments while Zhu et al. (2019) use LCSTS and CNN/DM for experiments, we conduct experiments on all the 4 datasets.", "When comparing with Duan et al. (2019) and Zhu et al. (2019), we use the same number of translated parallel data for training.", "Due to limited computing resources, we only do unsupervised experiments on gigaword and LCSTS datasets.", "Notice that the test sets provided by Zhu et al. (2019) are unprocessed, therefore we have to process the test samples they provided ourselves.", "Gigaword English Gigaword corpus (Napoles et al., 2012) contains 3.80M training pairs, 2K validation pairs, and 1,951 test pairs.", "We use the human-translated Chinese source sentences provided by (Duan et al., 2019) to do Zh-to-En tests.", "DUC2004 DUC2004 corpus only contains test sets.", "We use the model trained on gigaword corpus to generate summaries on DUC2004 test sets.", "We use the 500 human-translated test samples provided by (Duan et al., 2019) to do Zh-to-En tests.", "LCSTS LCSTS (Hu et al., 2015) is a Chinese summarization corpus, which contains 2.40M training pairs, 10,666 validation pairs, and 725 test pairs.", "We use 3K cross-lingual test samples provided by Zhu et al. (2019) to do Zh-to-En tests.", "CNN/DM CNN/DM (Hermann et al., 2015) contains 287.2K training pairs, 13.3K validation pairs, and 11.5K test pairs.", "We use the 3K cross-lingual test samples provided by Zhu et al. 
(2019) to do En-to-Zh cross-lingual tests.", "We use ROUGE-1 (unigram), ROUGE-2 (bigram), and ROUGE-L (LCS) F1 scores as the evaluation metrics, which are most commonly used evaluation metrics in the summarization task.", "For unsupervised cross-lingual summarization, we set the following baselines:", "Unified It jointly trains English and Chinese monolingual summarizations in a unified model and uses the first token of the decoder to control whether it generates Chinese or English summaries.", "Unified+CLWE It builds a unified model and adopts pre-trained unsupervised cross-lingual word embeddings.", "The cross-lingual word embeddings are obtained via projecting embeddings from source language to target language.", "We use Vecmap 2 to learn the cross-lingual word embeddings.", "For supervised cross-lingual summarization, we compare our model with (Shen et al., 2018), (Duan et al., 2019), and Zhu et al. (2019).", "We also consider the following baselines for comparison: 2 https://github.com/artetxem/vecmap Pipe-TS The Pipe-TS baseline first uses a Transformer-based translation model to translate the source text to the other language, then uses a monolingual summarization model to generate summaries.", "To make this baseline stronger, we replace the translation model with the Google translation system and name it as Pipe-TS* .", "Pipe-ST The Pipe-ST baseline first uses a monolingual summarization model to generate the summaries, then uses a translation model to translate the summaries to the other language.", "We replace the translation model with the Google translation system as Pipe-ST* .", "Pseudo The Pseudo baseline directly trains a cross-lingual summarization model by using the pseudo parallel cross-lingual summarization data.", "XLM Pretraining This method is proposed by Lample and Conneau (2019), where they pretrain the encoder and decoder on large-scale multilingual text using causal language modeling (CLM), masked language modeling (MLM), and translation 
language modeling (TLM) tasks. 3", "6.5 Implementation Details For the Transformer architecture, we use the same configuration as Vaswani et al. (2017), where the number of layers, model hidden size, feed-forward hidden size, and the number of heads are 6, 512, 1024, and 8, respectively.", "We set g = d_model = 512 in ScaleNorm.", "The mapper is a linear layer with a hidden size of 512, and the discriminator is a two-layer network with a hidden size of 2048.", "We use the NLTK 4 tool to process English texts and the jieba 5 tool to process Chinese texts.", "The vocabulary sizes for English and Chinese are 50,000 and 80,000, respectively.", "We set λ1 = 1 , λ2 = 5 , λ3 = 2 in unsupervised training.", "We set λ1 = 0.5 , λ2 = 5 in supervised training, according to performance on the validation set.", "We set dis_iters = 5 in Alg.", "1.", "3 This baseline was suggested by the reviewers, and the results are only for reference, since it additionally uses a large amount of pre-training text.", "We use the Adam optimizer (Kingma and Ba, 2014) with β = (0.9, 0.98) for optimization.", "We set the learning rate to 3e-4 and adopt learning-rate warm-up (Goyal et al., 2017) for the first 2,000 steps; the initial warm-up learning rate is set to 1e-7.", "We adopt the dropout technique and set the dropout rate to 0.2.", "The experimental results of unsupervised cross-lingual summarization are shown in Table 2; it can be seen that our model significantly outperforms all baselines by a large margin.", "When only training a unified model of the two languages, the model's cross-lingual transferability is still poor, especially on the Gigaword dataset.", "Incorporating cross-lingual word embeddings into the unified model can improve the performance, but the improvement is limited.", "We think this is because the cross-lingual word embeddings learned by Vecmap cannot leverage contextual information.", "Due to space limitations, we present case studies in the Appendix.", "After checking the generated summaries of the two baseline models, we find that they can generate readable texts, but the generated texts are far from the theme of the source text.", "This indicates that the encoder and decoder of these baselines have a large gap, such that the decoder cannot understand the output of the encoder.", "We also find that summaries generated by our model are obviously more relevant, demonstrating that aligned representations between languages are helpful.", "(Table 2: Rouge F1 scores (%) on unsupervised cross-lingual summarization tests; LCSTS R1/R2/RL, then Gigaword R1/R2/RL: Unified 13.52/1.35/10.02, 5.25/0.87/2.09; Unified+CLWE 14.02/1.49/12.10, 6.51/1.07/2.92; Ours 20.11/5.46/16.07, 13.75/4.29/11.82.)", "But we can also see that there is still a gap between our unsupervised results (Table 2) and supervised results (Table 1), indicating that our model has room for improvement.", "The experimental results of supervised cross-lingual summarization are shown in Table 1.", "Due to the lack of a corpus for training a Chinese long-document 
summarization model, we do not experiment with the Pipe-TS model on the CNN/DM dataset.", "By comparing our results with the pipeline-based or pseudo baselines, we find that our model outperforms all these baselines in all cases.", "Our model achieves an improvement of 0 to 3 Rouge scores over the Pseudo model trained directly with the translated parallel cross-lingual corpus, and 1.5 to 4 Rouge-1 scores over the pipeline models.", "We also observe that models using the Google translation system all perform better than models using the Transformer-based translation system.", "This may be because the Transformer-based translation system introduces some UNK tokens, and the Transformer-based translation system trained by ourselves does not perform as well as the Google translation system.", "In addition, Pipe-ST models perform better than Pipe-TS models,", "which is consistent with the conclusions of previous work.", "This is because (1) the translation process may discard some informative clauses, (2) the domain of the translation corpus is different from the domain of the summarization corpus, which brings a domain discrepancy problem to the translation process, and (3) the translated texts are often translationese (Graham et al., 2019).", "The Pseudo model performs better than Pipe-TS models but performs similarly to Pipe-ST models.", "By comparing our results with others, we find that our model outperforms Shen et al. (2018) and Duan et al. (2019) on both the Gigaword and DUC2004 test sets, and it outperforms Zhu et al. (2019) on the LCSTS dataset.", "But our Rouge scores are lower than Zhu et al. 
(2019) on the CNN/DM dataset, especially the Rouge-2 score.", "However, our model performs worse than pre-trained models.", "A human evaluation was also performed.", "Since we cannot obtain the summaries generated by other models, we only compare with our baselines in the human evaluation.", "We randomly sample 50 examples from the Gigaword (Zh-to-En) test set and 20 examples from the CNN/DM (En-to-Zh) test set.", "We ask five volunteers to evaluate the quality of the generated summaries on the following three aspects: (1) Informativeness : how well do the generated summaries cover the key content of the source text?", "(2) Conciseness : how concise are the generated summaries?", "(3) Fluency : how fluent are the generated summaries?", "(Table 5 fragment: Method / Gigaword R1 R2 RL / CNN/DM R1 R2 RL; Ours (supervised): 32.04 13.60 27.91, 38.12 16.76 33.86; w/o summ.)", "The scores are between 1 and 5, with 5 being the best.", "We average the scores and show the results in Table 3 and Table 4.", "Our model exceeds all baselines in informativeness and conciseness scores, but gets a slightly lower fluency score than Pipe-ST*.", "We think this is because the Google translation system has the ability to identify grammatical errors and generate fluent sentences.", "To study the importance of different components of our model, we also test some variants of our model.", "For supervised training, we set up variants (1) without the (monolingual) summarization loss, (2) without mappers 6 , (3) replacing ScaleNorm with LayerNorm, (4) without pre-trained monolingual steps, and (5) unsharing the encoder and decoder.", "For unsupervised training, we additionally set up variants without the cycle-reconstruction loss or the back-translation loss.", "The results of the ablation tests of supervised and unsupervised cross-lingual summarization are shown in Table 5 and Table 6, respectively.", "The role of the mappers does not seem obvious in the case of supervised training.", "We speculate that this may be due to the joint training of 
monolingual and cross-lingual summarizations, and directly constraining the context representations before mapping can also yield shared (aligned) representations.", "But mappers are crucial for unsupervised cross-lingual summarization.", "For supervised cross-lingual summarization, except for mappers, all components contribute to the improvement of the performance.", "The performance decreases after removing any of the components.", "For unsupervised cross-lingual summarization, all components contribute to the improvement of the performance and the mappers and shared encoder/decoder are key components.", "6 In this case, we directly constrain the parallel z x and z y to be the same.", "Early researches on cross-lingual abstractive summarization are mainly based on the monolingual summarization methods and adopt different strategies to incorporate bilingual information into the pipeline model (Leuski et al., 2003; Orasan and Chiorean, 2008; Wan et al., 2010; Wan, 2011; Yao et al., 2015).", "Recently, some neural cross-lingual summarization systems have been proposed for cross-lingual summarization (Shen et al., 2018; Duan et al., 2019; Zhu et al., 2019).", "The first neural-based crosslingual summarization system was proposed by Shen et al. (2018), where they first translate the source texts from the source language to the target language to form the pseudo training samples.", "A teacher-student framework is adopted to achieve end-to-end cross-lingual summarization.", "Duan et al. (2019) adopt a similar framework to train the cross-lingual summarization model, but they translate the summaries rather than source texts to strengthen the teacher network.", "Zhu et al. 
(2019) propose a multi-task learning framework by jointly training cross-lingual summarization and monolingual summarization (or machine translation).", "They also released an English-Chinese cross-lingual summarization corpus with the aid of online translation services.", "Learning cross-lingual representations is a beneficial method for cross-lingual transfer.", "Conneau et al. (2017) use adversarial networks to learn mappings between languages without supervision.", "They show that their method works very well for word translation, even for some distant language pairs like English-Chinese.", "Lample et al. (2018) learn word mappings between languages to build an initial unsupervised machine translation model, and then perform iterative back-translation to fine-tune the model.", "Aldarmaki and Diab (2019) propose to directly map the averaged embeddings of aligned sentences in a parallel corpus, and achieve better performance than word-level mapping in some cases.", "In this paper, we propose a framework that jointly learns to align and summarize for neural cross-lingual summarization.", "We design training objectives for supervised and unsupervised cross-lingual summarization, respectively.", "We also propose methods to enhance the isomorphism and cross-lingual transfer between languages.", "Experimental results show that our model outperforms supervised baselines in most cases and outperforms unsupervised baselines in all cases.", "This work was supported by the National Natural Science Foundation of China (61772036), the Tencent AI Lab Rhino-Bird Focused Research Program (No. JR201953), and the Key Laboratory of Science, Technology and Standard in Press Industry (Key Laboratory of Intelligent Press Media Technology).", "We thank the anonymous reviewers for their helpful comments.", "Xiaojun Wan is the corresponding author." ]
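One ablation above swaps ScaleNorm for LayerNorm. As a point of reference, ScaleNorm (Nguyen and Salazar, 2019) simply rescales an activation vector to a learned length g; the following is a minimal pure-Python sketch, with illustrative (made-up) input values, not the paper's implementation:

```python
import math

# ScaleNorm: scale the activation vector x to length g (a learned scalar).
# This is the normalization variant compared against LayerNorm in the ablation.
def scale_norm(x, g, eps=1e-5):
    norm = math.sqrt(sum(v * v for v in x))
    return [g * v / (norm + eps) for v in x]

# LayerNorm (without the learned affine parameters), for contrast.
def layer_norm(x, eps=1e-5):
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    return [(v - mean) / math.sqrt(var + eps) for v in x]
```

ScaleNorm keeps the direction of x and only normalizes its magnitude, whereas LayerNorm also centers the vector; the ablation above measures which behavior helps the summarization model.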
[ "abstain", "abstain", "abstain", "objective", "objective", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "result", "objective", "objective", "method", "objective", "result", "method", "objective", "objective", "objective", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "other", "abstain", "abstain", "method", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "method", "abstain", "abstain", 
"abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "objective", "objective", "result", "other", "other", "other" ]
[ "This paper proposes a simple and effective algorithm for incorporating lexical constraints in neural machine translation.", "Previous work either required re-training existing models with the lexical constraints or incorporating them during beam search decoding with significantly higher computational overheads.", "Leveraging the flexibility and speed of a recently proposed Levenshtein Transformer model (Gu et al., 2019), our method injects terminology constraints at inference time without any im-pact on decoding speed.", "Our method does not require any modification to the training procedure and can be easily applied at runtime with custom dictionaries.", "Experiments on English-German WMT datasets show that our approach improves an unconstrained baseline and previous approaches.", "Neural machine translation (NMT) systems can generate higher-quality translations than phrase-based MT systems, but they come at the cost of losing control over how translations are generated.", "Without the explicit link between the source and the target vocabulary, enforcing specific terminological translation in domain-specific settings becomes painfully difficult for NMT systems.", "Consider an example where we have a Chinese-English NMT system trained for the E-commerce domain, and there is no prior knowledge of the brand name in the training data, the system would translate the input term literally as red ( ) rice ( ) instead of Redmi .", "In such scenarios, machine translation users often maintain in-domain dictionaries to ensure that specific information is translated accurately and consistently.", "A line of previous work that tried to address this problem required re-training the NMT models with lexical constraints, either by a placeholder mechanism (Crego et al., 2016) or via code-mixed training (Song et al., 2019; Dinu et al., 2019).", "However, they do not reliably guarantee the presence of the constraints at test time.", "Another approach focused on constrained beam 
search decoding (Hokamp and Liu, 2017; Post and Vilar, 2018; Hu et al., 2019).", "Although the latter approaches offer tighter control over the target constraint terms, they significantly slow down decoding.", "Different from the existing line of work, we invoke lexical constraints using a non-autoregressive approach. 1", "To do this, we use the Levenshtein Transformer (LevT) (Gu et al., 2019), an edit-based generation model that iteratively performs deletion and insertion operations during inference.", "LevT achieves substantially higher inference speed compared to beam search without affecting quality.", "We add a constraint insertion step to LevT decoding to seamlessly decode the target language sequence while adhering to specific lexical constraints, achieving the same speed as standard LevT decoding.", "Previous approaches integrated lexical constraints in NMT either via constrained training or constrained decoding.", "Crego et al. (2016) replaced entities with placeholders that remained unchanged during translation and placed them back in a post-processing step.", "Song et al. (2019) trained a Transformer (Vaswani et al., 2017) model by augmenting the data to include the constraint target phrases in the source sentence.", "Dinu et al. (2019) proposed a similar idea and additionally used factored training.", "Other approaches proposed enforcing lexical constraints during inference with various improvements to constraint-aware beam search, such as grid beam search (Hokamp and Liu, 2017), dynamic beam allocation (Post and Vilar, 2018), and its optimized vectorized version (Hu et al., 2019).", "Footnote 1: In the literature, non-autoregressive NMT decoding mostly refers to approaches that do not generate tokens sequentially, although they may perform iterative refinement (Lee et al., 2018).", "Hasler et al.
(2018) built finite-state acceptors to integrate constraints in a multi-stack decoder.", "These lexically-constrained decoding approaches rely on autoregressive inference that generates one target token at a time, which makes it difficult to parallelize the decoder and monotonically increases decoding time.", "While being mostly effective at forcing the inclusion of pre-specified terms in the output, these approaches further slow down the beam search process.", "Post and Vilar (2018) reported a 3x slowdown compared to standard beam search.", "Non-autoregressive neural machine translation (NAT) (Gu et al., 2018) attempts to move away from conventional autoregressive decoding.", "Such a direction enables parallelization during sequence generation, resulting in lower inference latency.", "Recent NAT approaches treat inference as an iterative refinement process, first proposed by Lee et al. (2018).", "Following this direction, it is intuitive to perform decoding using edit operations, such as insertion (Stern et al., 2019) or both insertion and deletion (LevT, Gu et al. (2019)).", "The LevT model has been shown to outperform existing refinement-based models, such as Ghazvininejad et al.
(2019), and performs comparably to autoregressive Transformer models.", "Our method integrates lexical constraints in NAT decoding, utilizing the flexibility, speed, and performance of LevT.", "The Levenshtein Transformer (LevT) (Gu et al., 2019) has an encoder-decoder framework based on the Transformer architecture (Vaswani et al., 2017) with multi-headed self-attention and feed-forward networks.", "Unlike token generation in a typical Transformer model, the LevT decoder models a Markov Decision Process (MDP) that iteratively refines the generated tokens by alternating between insertion and deletion operations.", "After embedding the source input through a Transformer encoder block, the LevT decoder follows the MDP formulation for each sequence at the k-th iteration, y^k = (y_1, y_2, ..., y_n), where y_1 and y_n are the start (<s>) and end (</s>) symbols.", "The decoder then generates y^{k+1} by performing deletion and insertion operations via three classifiers that run sequentially:", "[Figure 1: Constraint insertion followed by the placeholder and token classifiers, illustrated on the example '<s> Nevada hat bereits ein Pilot@@ projekt abgeschlossen .']", "1. Deletion Classifier, which predicts for each token position whether it should be kept or deleted, 2. Placeholder Classifier, which predicts the number of tokens to be inserted between every two consecutive tokens and then inserts the corresponding number of placeholder [PLH] tokens, 3.
Token Classifier, which predicts for each [PLH] token an actual target token.", "Each prediction is conditioned on the source text and the current target text.", "The same Transformer decoder block is shared among the three classifiers.", "Decoding stops when the current target text does not change, or when a maximum number of refinement iterations has been reached.", "The LevT model is trained using sequence-level knowledge distillation (Kim and Rush, 2016) from a Transformer teacher whose beam search output is used as ground truth during training.", "We refer the reader to Gu et al. (2019) for a detailed description of the LevT model and training routine.", "For sequence generation, the LevT decoder typically starts the first iteration of the decoding process with only the sentence boundary tokens, y^0 = <s> </s>.", "To incorporate lexical constraints, we populate the y^0 sequence with the target constraints before the first deletion operation, as shown in Figure 1. The initial target sequence will pass through the deletion, placeholder, and insertion classifiers sequentially, and the modified sequence will be refined for several iterations.", "The decoding steps are explained in detail below.", "Constraint Insertion: More formally, given a list of m target constraints C_1, C_2, ..., C_m, where each constraint C_i is possibly a multi-token phrase C_i = w_{i1}, w_{i2}, ..., w_{i|C_i|}, we insert the constraints into the decoding sequence before the deletion operation to form y^0 = <s> C_1 C_2 ...", "C_m </s>.", "Deletion Operation: Next, y^0 passes through the deletion classifier to decide which tokens w_{ij} to remove.", "If the deletion operation is allowed on the constraint tokens, the presence of each constraint in the final output is not guaranteed, especially when the supplied constraints are out of context for the decoder.", "To mitigate this problem, we optionally disallow the deletion operation on the constraint tokens by introducing a constraint mask to
indicate the positions of constraint tokens in the sequence.", "We forcefully set the deletion classifier prediction for all positions in this mask to keep.", "The positions in this mask are re-computed accordingly after each deletion and insertion operation.", "Insertion Operation: Finally, y^0 passes through the placeholder classifier to predict the number of tokens to be inserted and generate the corresponding number of [PLH] tokens, and the token classifier then assigns an actual target token to every [PLH] token.", "Each constraint may contain multiple tokens, and [PLH] tokens may be inserted between tokens from the same constraint.", "To prevent this from happening and to keep each constraint intact, we optionally prohibit inserting [PLH] within a multi-token constraint by constraining the number of such placeholders to 0.", "In Figure 1, our constraint insertion is executed at the first pass, and subsequent iterations start from deletion (indicated by a loop in the figure).", "We note that this step happens only at inference; during training, the original LevT training routine is carried out without the constraint insertion.", "...perform lexically-constrained decoding. (Footnote 2: https://github.com/pytorch/fairseq/commit/2d51e04)", "All Transformer blocks in our LevT model follow the base configuration that contains 6 layers with 8 attention heads each, with a model size d_model = 512 and a feed-forward layer size of 2048; the source and target embeddings share the same vocabulary.", "The LevT model is trained with the knowledge distillation routine, using the Transformer-base output released by Gu et al. (2019).", "We leave more experimental details to the Appendix.", "We evaluate our approach on the WMT'14 English-German (En-De) news translation task (Bojar et al., 2014) with En-De bilingual dictionary entries extracted from Wiktionary 3 following Dinu et al.
(2019), by matching the source and target phrases of the dictionary entries in the source and target sentences, respectively.", "We also evaluate our approach on two En-De test sets released by Dinu et al. (2019) to compare against previous work on applying lexical constraints in NMT (Post and Vilar, 2018; Dinu et al., 2019).", "The two test sets are subsets of the WMT'17 En-De test set (Bojar et al., 2017) extracted using Wiktionary and the Interactive Terminology for Europe (IATE) terminology database, 4 respectively.", "Both the WMT'14 and WMT'17 En-De datasets are tokenized using the Moses tokenization scripts and segmented into sub-word units using byte-pair encoding (Sennrich et al., 2016).", "We evaluate the systems using BLEU scores (Papineni et al., 2002) and term usage rate (Term%), which is defined as the number of constraints generated in the output divided by the total number of the given constraints.", "Footnote 3: https://dumps.wikimedia.org/enwiktionary/; Footnote 4: https://iate.europa.eu/", "operation and forcefully disallowing deletion of the constraints (+ No Del.) and", "(iv) disallowing [PLH] insertion between tokens from the same constraint (+ No Ins.
).", "Table 2 shows an example where prohibiting constraint deletion prevents catastrophic removal of the lexical constraint.", "We report results on both the filtered test set for sentence pairs that contain at least one target constraint (Constr., 454 sentences) and the full test set (Full, 3,003 sentences).", "The constraint insertion operation increases the term usage rate from about 80% to over 94%, and further disallowing deletion of the constraints achieves above 99% term usage.", "Prohibiting insertion between each constraint's tokens guarantees a 100% term usage.", "For sentences with lexical constraints, we observe a statistically significant improvement of 0.6 BLEU ( p -value < 0.05) based on bootstrap resampling (Koehn, 2004).", "On the full test set, the BLEU improves by 0.1.", "The small margin of improvement is because only 1% of the total reference tokens are constraint tokens.", "Unlike previous work that sacrificed decoding speed to enforce lexical constraints (e.g. Hasler et al., 2018; Post and Vilar, 2018), there is no significant difference in the number of sentences decoded per second between the unconstrained and the lexically constrained LevT models.", "Table 3 presents the comparison to two previous approaches: constrained decoding with dynamic beam allocation (Post and Vilar, 2018) and data augmentation by replacing the source terms with target constraints during training (Dinu et al., 2019).", "We refer to them as POST 18 and DINU 19, respectively, in Table 3. We evaluate each approach on the WMT'17 En-De test set with constraint terms from Wiktionary and IATE dictionaries.", "Note that our baseline LevT model with Transformer blocks of 6 layers is superior to that of Dinu et al. 
(2019), who used a 2-layer configuration.", "Despite having a stronger baseline, we obtain higher absolute BLEU", "score improvements (0.96 and 1.16 BLEU on Wiktionary and IATE, respectively) and achieve 100% term usage.", "We report additional experiments on the WMT'16 Romanian-English news translation task (Bojar et al., 2016) in the Appendix.", "To analyze whether our approach inserts the constraints at correct positions, we compare it to a baseline approach of randomly inserting the constraint terms in the output of our baseline LevT model.", "Note that we only insert those constraints that are not already present in the output.", "Although this results in 100% term usage, we observe that the BLEU score drops from 29.9 to 29.3 on the Constr.", "WMT'14 test set, whereas our approach improves the BLEU score.", "The LevT model with our proposed constraint insertion seems to inherently have the ability to place the constraints at correct positions in the target sentence.", "Although prohibiting constraint deletion improves term usage in the final translation and achieves higher BLEU scores, it limits the possibility of reordering when there is more than one constraint during inference.", "For the English-German test sets we evaluated on, 97-99% of the target constraints appear in the same order as the source terms.", "This issue may become more apparent in language pairs with more distinct syntactic differences between the source and target languages.", "In practice, most of the entries in terminology databases (Wiktionary, IATE, etc.)
are nominal.", "Thus, the reordering of lexical constraints boils down to whether the source and target languages share the same argument-predicate order.", "We will explore potential strategies to reorder constraints dynamically in future work.", "We proposed a non-autoregressive decoding approach to integrate lexical constraints for NMT.", "Our constraint insertion step is simple, and we have empirically validated its effectiveness.", "The approach demonstrated control over constraint terms in target translations while being able to decode as fast as a baseline Levenshtein Transformer model, which achieves significantly higher decoding speed than traditional beam search.", "In addition to the terminological lexical constraints discussed in this work, future work can modify the insertion or selection operations to handle target translations of multiple forms; this could disambiguate the morphological variants of the lexical constraints." ]
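The constraint insertion and optional no-deletion mask described above can be sketched in pure Python. This is a toy illustration under stated assumptions, not the fairseq implementation; the deletion-classifier predictions are stand-ins, and the token strings come from the Figure 1 example:

```python
# Seed the initial LevT target y^0 = <s> C_1 ... C_m </s> with the constraint
# tokens, and track which positions may never be deleted (the "+ No Del."
# option). Boundary tokens are also treated as protected in this sketch.
def constraint_insertion(constraints):
    y, protected = ["<s>"], [True]
    for phrase in constraints:          # each constraint may be multi-token
        for tok in phrase:
            y.append(tok)
            protected.append(True)
    y.append("</s>")
    protected.append(True)
    return y, protected

# Apply a (hypothetical) deletion classifier's predictions, forcing "keep"
# on all protected positions, mirroring the constraint-mask mechanism.
def apply_deletion(y, protected, delete_pred):
    kept = [i for i in range(len(y)) if protected[i] or not delete_pred[i]]
    return [y[i] for i in kept], [protected[i] for i in kept]
```

After the first pass, the refined sequence (constraints plus newly inserted tokens) loops back through deletion and insertion until it stops changing, exactly as in standard LevT decoding.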
[ "objective", "abstain", "abstain", "method", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method" ]
[ "Individual differences in speakers are reflected in their language use as well as in their interests and opinions.", "Characterizing these differences can be useful in human-computer interaction, as well as analysis of human-human conversations.", "In this work, we introduce a neural model for learning a dynamically updated speaker embedding in a conversational context.", "Initial model training is unsupervised, using context-sensitive language generation as an objective, with the context being the conversation history.", "Further fine-tuning can leverage task-dependent supervised training.", "The learned neural representation of speakers is shown to be useful for content ranking in a socialbot and dialog act prediction in human-human conversations.", "1 1 Introduction Representing language in context is key to improving natural language processing (NLP).", "There are a variety of useful contexts, including word history, related documents, author/speaker information, social context, knowledge graphs, visual or situational grounding, etc.", "This paper addresses the problem of modeling the speaker.", "Accounting for author/speaker variations has been shown to be useful in many NLP tasks, including language understanding (Hovy and Sgaard, 2015; Volkova et al., 2013), language generation (Mirkin et al., 2015; Li et al., 2016), human-computer dialog policy (Bowden et al., 2018), query completion (Jaech and Ostendorf, 2018; Shokouhi, 2013), comment recommendation (Agarwal et al., 2011) and more.", "In this work, we specifically focus on dialogs, including both human-computer (social-bot) and human-human conversations.", "While many studies rely only on discrete meta-data and/or demographic information, such information is not always available.", "Thus, it is of interest to learn about the speaker from the language directly, as it relates to the person's interests and speaking style.", "Motivated by the success of unsupervised contextualized representation learning 
for words and documents (Mikolov et al., 2013; Kiros et al., 2015; McCann et al., 2017; Peters et al., 2018; Devlin et al., 2019), our approach is to use unsupervised learning with a neural model of a speaker's dialog history.", "The model uses latent speaker mode vectors for representing a speaker turn, as in Cheng et al. (2017), which provides a framework for analyzing what the model learns about speaking style.", "Further, the model is structured to allow a dynamic update of the speaker vector at each turn in a dialog, in order to capture changes over time and improve the speaker representation with added data.", "The speaker embeddings can be used as context in conversational language understanding tasks, e.g., as an additional input in dialog policy prediction in human-computer dialogs or in understanding dialog acts in human-human dialogs.", "In the supervised training of such tasks, the speaker model can be fine-tuned.", "This work makes two primary contributions.", "First, we propose a neural model for learning dynamically updated speaker embeddings in conversational interactions.", "The model training is unsupervised, relying only on the speaker's conversation history rather than meta information (e.g., age, gender) or audio signals, which may not be available in a privacy-sensitive situation.", "The model also has a learnable component for analyzing the latent modes of the speaker, which can be helpful for aligning the learned characteristics of a speaker with human-interpretable factors.", "[Figure 1: The dynamic speaker model, comprising a turn-level latent mode analyzer with global mode vectors and attention, a conversation-level speaker state tracker, and a speaker language predictor.]", "Second, we use the learned dynamic speaker embeddings in two representative tasks in dialogs: predicting user topic decisions in socialbot dialogs, and classifying dialog acts in human-human dialogs.", "Empirical results show that using the dynamic
speaker embeddings significantly outperforms the baselines in both tasks.", "On the public dialog act classification task, the proposed model achieves state-of-the-art results.", "In this section, we start with an overview of the proposed model for learning speaker embeddings that are dynamically refined over the course of a conversation.", "Details about individual components are described in subsequent subsections.", "The model is based on two motivations.", "First, a speaker's utterances reflect intents, speaking style, etc.", "Thus, we may build speaker embeddings by analyzing latent modes that characterize utterances in terms of such characteristics, apart from any topic-related interests a user might have.", "Second, information about a speaker accumulates as the conversation evolves, which allows us to gradually refine and update the speaker embeddings.", "The speaker embeddings can be directly used as features or fine-tuned for downstream tasks.", "We design the dynamic speaker model to focus on learning cues from the speaker's utterances, and leave the modeling of different speaker-addressee interactions to supervised downstream tasks.", "The model consists of three components, as illustrated in Fig. 1.
First, a latent mode analyzer reads in an utterance and analyzes its latent modes.", "It processes the speaker's turns independently of each other and builds a local speaker mode vector for each turn.", "To accumulate speaker information as the conversation evolves, we build a speaker state tracker that maintains speaker states at individual turns.", "At each turn, it takes two input vectors to update the speaker state:", "1) the local speaker mode vector for the current turn from the latent mode analyzer, and", "2) the speaker state at the previous turn from the tracker itself.", "Finally, we employ a speaker language predictor to drive the learning of the latent mode analyzer and the speaker state tracker.", "It reconstructs the utterance using the corresponding speaker state.", "Intuitively, the speaker language predictor models overall linguistic regularities itself and uses the speaker state to supply information related to speaker characteristics.", "For sequence modeling in all three components, we use the long short-term memory (LSTM) recurrent neural network (Hochreiter and Schmidhuber, 1997).", "In our experiments, the three components are trained jointly.", "At each turn t, the latent mode analyzer constructs a local speaker mode vector u_t ∈ R^c that captures salient characteristics of the speaker's current utterance for use in the dynamic speaker model.", "First, the utterance word sequence w_{t,1}, ..., w_{t,N_t} is mapped to an embedding sequence, where w_{t,n} is represented with w_{t,n} ∈ R^d according to a lookup in the dictionary W ∈ R^{|V|×d} associated with vocabulary V.", "Then, the latent mode analyzer goes through two stages to construct u_t.", "In the first stage, a bi-directional LSTM (Bi-LSTM), which consists of a forward LSTM and a backward LSTM, is used to encode the word embedding sequence into a fixed-size utterance summary vector s_t ∈ R^{2m}, where m is the dimension of the hidden layer in the forward and backward LSTMs.", "Formally, the forward LSTM
computes its hidden states as e^F_{t,n} = g^F(w_{t,n}, e^F_{t,n-1}) ∈ R^m for n = 1, ..., N_t, where g^F(·,·) denotes the forward LSTM function.", "The backward LSTM computes its hidden states e^B_{t,n} ∈ R^m similarly.", "The initial hidden states e^F_{t,0} and e^B_{t,N_t+1} are set to zeros.", "The summary vector s_t is the concatenation of the two final hidden states, s_t = [e^F_{t,N_t}, e^B_{t,1}].", "In the second stage, the utterance summary vector s_t is compared with K global mode vectors u_1, ..., u_K ∈ R^c, which are learned as part of the model.", "The association score a_{t,k} between s_t and u_k is computed using the dot-product attention mechanism (Vaswani et al., 2017) as follows: a_{t,k} = exp(⟨P s_t, Q u_k⟩) / Σ_{k'=1}^{K} exp(⟨P s_t, Q u_{k'}⟩), (1) where P ∈ R^{c×2m} and Q ∈ R^{c×c} are learnable weights, and ⟨·,·⟩ indicates the dot product of two vectors.", "The local speaker mode vector is then constructed as u_t = Σ_{k=1}^{K} a_{t,k} u_k.", "The speaker state tracker provides a dynamic summary of speaker language features observed in the conversation history, using an LSTM to encode the sequence of local speaker mode vectors u_1, ..., u_t.", "At turn t, this LSTM updates its hidden state h_t ∈ R^m using the local speaker mode vector u_t and its previous hidden state h_{t-1} ∈ R^m, i.e., h_t = g^S(u_t, h_{t-1}), where g^S(·,·) is the speaker LSTM function.", "The hidden state h_t provides the speaker state vector at turn t.", "The speaker language predictor is a conditional LSTM language model (LM) that reconstructs the word sequence in the current turn.", "Language modeling is a way to provide a training signal for unsupervised learning that models the conditional probability Pr(w_{t,n} | w_{t,<n}), where w_{t,<n} denotes all preceding words of w_{t,n} in turn t.", "The speaker language predictor uses the same dictionary W for word embeddings as the latent mode analyzer to represent
words at time t.", "The initial hidden state d_{t,0} ∈ R^m of the LSTM is set to tanh(L h_t), where L ∈ R^{m×m} is a learnable matrix and tanh(·) is the hyperbolic tangent function.", "Subsequent LSTM hidden states are computed as d_{t,n} = g^{LM}(r^I(w_{t,n-1}, h_t), d_{t,n-1}) for n = 1, ..., N_t + 1, where r^I(w_{t,n-1}, h_t) = R^I_w w_{t,n-1} + R^I_h h_t is a linear transformation with learned parameters R^I_w ∈ R^{m×d} and R^I_h ∈ R^{m×m}, g^{LM}(·,·) is a forward LSTM function, and w_{t,0} is the word embedding for the start-of-sentence token.", "By injecting the speaker state vector at every time step n in turn t, the model is more likely to favor directly using the speaker state vector (vs. the word history) for predicting the speaker language.", "The conditional probability is then computed as Pr(w_{t,n} | w_{t,<n}) = softmax(V r^O(h_t, d_{t,n})), (2) where V ∈ R^{|V|×m} is the weight matrix, and r^O(h_t, d_{t,n}) = R^O_h h_t + R^O_d d_{t,n} is another linear function with learnable parameters R^O_h, R^O_d ∈ R^{m×m}.", "The last word w_{t,N_t+1} is always the end-of-sentence token.", "The training objective for a given conversation is the log-likelihood Σ_t Σ_n log Pr(w_{t,n} | w_{t,<n}), where the conditional probability is defined in (2).", "The Adam optimizer (Kingma and Ba, 2015) is used with a configuration of β_1 = 0.9 and β_2 = 0.97.", "The initial learning rate is set to 0.002.", "We halve the learning rate at each epoch once the development log-likelihood decreases, and terminate the training when it decreases for the second time.", "This validation protocol is used throughout the paper for training the proposed model.", "In our experiments, the embedding dictionary W is initialized using pre-trained 300-dimensional word embeddings (Bojanowski et al., 2017) for words within the vocabulary of this resource.", "The remaining part of W and other model parameters are randomly initialized based on N(0, 0.01).
", "The mode vector dimension c is set to 64.", "We tune the number of global mode vectors K from {16, 32} and the hidden layer size m from {128, 160}.", "The final model is selected based on the log-likelihood on the development set.", "We first study a prediction task that estimates whether the user engaged in a socialbot conversation would accept a suggested topic.", "Specifically, we use a corpus of human-socialbot conversations collected during the 2017 Alexa Prize competition (Ram et al., 2017) from the Sounding Board system (Fang et al., 2018; Fang, 2019).", "Due to privacy concerns, the socialbot does not have access to any identity information about users.", "Also, since each device may be used by multiple users, the device address is not a reliable indicator of the user ID.", "Therefore, the ability to profile the user through one conversational interaction is desirable for guiding the socialbot's dialog policy.", "Each conversation begins with a greeting and ends when the user issues a stop command.", "The socialbot engages the user in the conversation using a wide range of content indexed by topics, where a topic corresponds to a noun or noun phrase that refers to a named entity (e.g., Google) or a concept (e.g., artificial intelligence).", "These topics are extracted using both constituency parsing results of the textual content and content meta-information.", "During the conversation, the socialbot sometimes negotiates the topic with the user using an explicit confirmation turn and records the user's binary decision (accept or reject) on the topic.", "In socialbot conversations, a system turn is always followed by a user turn and vice versa.", "We tag system turns making an explicit confirmation about a topic and attach the corresponding binary user decisions to them.", "To curate the dataset for the topic decision prediction task, we use a total of 31,862 conversations with more than 5 user turns.", "On average there are around 22 user
turns per conversation.", "Not every system turn makes a topic suggestion, and the average number of topic decisions per conversation is 4.5.", "We randomly split the conversations into training, development, and test sets by 3/1/1.", "The data statistics are shown in Table 1. In our experiments, we directly use the speech recognition output of user utterances.", "The vocabulary $\mathcal{V}$ consists of roughly 11K words that appear at least 5 times in the training set.", "We use a feed-forward neural network (FFNN) to make binary predictions (accept vs. reject) for individual topic suggestions.", "For each topic suggestion, the FFNN takes two inputs:", "1) an embedding $x_{t'}$ for the suggested topic at system turn $t'$, and", "2) a user embedding vector $z_t$ at user turn $t$.", "Note the model does not have information about user turns after the system turn $t'$ when making the prediction, i.e., the user turn $t$ appears before the system turn $t'$.", "The topic embeddings $x_{t'}$ are looked up from the embedding dictionary learned by the FFNN.", "They are initialized by averaging the embeddings of their component words using the public pre-trained 300-dimensional word embeddings (Bojanowski et al., 2017).", "For the user embedding vector, we explore two settings that use different numbers of user turns as context.", "In both settings, topic decisions occurring in the first 5 user turns are not used for evaluation.", "Static User Embeddings : Motivated by the findings that most user characteristics can be inferred from initial interactions (Ravichander and Black, 2018), we derive a static user embedding vector for a conversation using the first 5 user turns and apply it for predicting topic decisions afterwards.", "Dynamic User Embeddings : Alternatively, we build a user embedding vector for user turn $t$ using all previous user turns.", "Here, a topic decision for system turn $t'$ is aligned with its preceding user turn $t$.", "In our
experiments, we compare different unsupervised models with our proposed dynamic speaker model.", "For both settings, all unsupervised models are pre-trained on all user turns in training conversations.", "They are fixed when training the FFNN classifier.", "The FFNN classifier is trained with the logistic loss using the Adam optimizer (Kingma and Ba, 2015).", "The training protocol is similar to that described in §2.4.", "We tune the hidden layer size from $\{64, 128\}$ and the number of hidden layers from $\{0, 1\}$.", "The model is selected based on the loss on the development set.", "In addition, we use a user-agnostic TopicPrior baseline.", "It builds a probability lookup for each topic using its acceptance rate on the training set.", "We tune a universal probability threshold for all topics based on the development set accuracy.", "In all experiments, three evaluation metrics are used: accuracy, area under the receiver operating characteristic curve (AUC), and normalized cross-entropy (N-CE).", "N-CE is computed as the relative cross-entropy reduction of the model over the TopicPrior baseline.", "As described in §3.2, we use the first 5 user turns to derive the user embedding vector for a conversation.", "We compare our dynamic speaker model with three other unsupervised models.", "DynamicSpeakerModel : For the proposed dynamic speaker model, we concatenate the speaker state vector $h_t$ and the local speaker mode vector $u_t$ for each of the first 5 user turns.", "[Table 2: Test set results (in %) for topic decision predictions using static user embeddings. Model / Acc / AUC / N-CE — TopicPrior: 68.8 / 72.5 / 0; UtteranceLDA: 68.8 / 73.1 / 12.6; UtteranceAE: 68.8 / 73.4 / 12.8; TopicDecisionEncoder: 68.9 / 73.8 / 13.4; DynamicSpeakerModel: 69.5 / 74.2 / 13.7.]", "Then, we apply the max-pooling operation over the 5 concatenated vectors to summarize all the information.", "The resulting vector $h$ is used as the user embedding vector.", "UtteranceLDA : The latent Dirichlet allocation (LDA) model (Blei et al.,
2003) is trained with 16 latent groups by treating all user utterances in a conversation as a document.²", "The trained LDA model builds a 16-dimensional probability vector as the user embedding vector by loading the first 5 user turns as a single document.", "UtteranceAE : The utterance auto-encoder model is built upon the sequence auto-encoder (Dai and Le, 2015).", "We replace the original encoder by a BiLSTM that encodes the utterance at user turn $t$ into a summary vector $s_t$ in the same way as the first stage of the latent mode analyzer described in §2.1.", "The auto-encoder is trained on all user utterances in the training data, using the same training protocol described in §2.4.", "We set the hidden layer size to 128.", "The user embedding vector is constructed by applying the max-pooling operation over the summary vectors $s_1, \ldots, s_5$ for the first 5 user turns.", "TopicDecisionEncoder : This model encodes the topic decisions that occurred in the first 5 user turns.", "The user embedding vector is the concatenation of two vectors.", "One is max-pooled from the topic embeddings for accepted topics, and the other for rejected topics, both including a dummy topic vector as a default.", "The topic embeddings are composed by averaging the public pre-trained 300-dimensional embeddings (Bojanowski et al., 2017) for words in the topic.", "Experiment results are summarized in Table 2.
The TopicPrior is a very strong predictor, with an accuracy on par with other user embeddings.", "(Footnote 2: To allow the LDA model to take bi-grams into account, we replace the uni-gram token $w_i$ with its bi-gram $(w_i, w_{i+1})$ concatenated as a single token if the bi-gram is among the top 500 most frequent bi-grams.)", "This indicates that the popularity-based approach is a good start for content ranking in socialbots when there is little user information.", "Nevertheless, we can still observe some improvement over the TopicPrior in terms of AUC and N-CE, which suggests that using information from initial interactions reduces the uncertainty of predictions.", "The proposed dynamic speaker model performs the best among the compared models, reducing the cross-entropy by 13.7% over the TopicPrior baseline.", "Here, we use all information accumulated before the system turn that suggests the topic to build the corresponding user embedding vector.", "Since UtteranceLDA is not as effective in the static embedding experiments, we only consider extending the UtteranceAE and TopicDecisionEncoder models for comparison here.", "DynamicSpeakerModel : The speaker state tracker in our model accumulates the user information as the conversation evolves.", "Thus, we directly concatenate the speaker state vector $h_t$ and the local speaker mode vector $u_t$ as the user embedding vector at user turn $t$.", "Other than using more turns, this is the same DynamicSpeakerModel configuration as in §3.3.", "UtteranceAE+LSTM : This model uses an LSTM to encode the summary vector sequence derived from the same utterance auto-encoder used in §3.3.", "The LSTM hidden states are treated as user embedding vectors at individual user turns.", "TopicDecisionLSTM : Similarly, an LSTM is used to encode the topic decision sequence.", "At each time step, the LSTM reads the concatenation of the topic embedding and the one-hot vector encoding the topic decision.", "We use the same topic embeddings as the TopicDecisionEncoder
in §3.3.", "Since not every user turn is associated with a topic decision, the time steps of this LSTM are aligned to a sequence of non-consecutive user turns.", "The LSTM hidden states are treated as user embedding vectors at the corresponding user turns.", "For UtteranceAE+LSTM and TopicDecisionLSTM, the hidden layer size of the LSTM is set to 128.", "While the utterance auto-encoder and topic embeddings are pre-trained, the LSTM components are jointly learned with the FFNN for composing dynamic user embeddings.", "Experiment results are shown in Table 3. The DynamicSpeakerModel performs the best.", "Compared to the results in Table 2, all three unsupervised models outperform their static counterparts, which suggests the advantage of using dynamic context for predicting user topic decisions as the conversation evolves.", "Statistical significance tests of the difference in performance of two systems were conducted under both the t-test using the predicted probabilities and McNemar's test using the binary predictions.", "Under both tests, the difference between the predictions from the TopicDecisionLSTM and the DynamicSpeakerModel is highly significant ($p < .001$).", "Predictions from UtteranceAE+LSTM and DynamicSpeakerModel are also significantly different based on both tests ($p < .001$).", "First, we manually inspect the predictions from the TopicDecisionLSTM and DynamicSpeakerModel used in §3.4 and the static baseline TopicPrior in §3.3.", "Compared with TopicPrior, we find that TopicDecisionLSTM is able to utilize the semantic relatedness between neighboring topics and the corresponding user decisions.", "For example, Elon Musk (the CEO) is likely to be rejected if Tesla (the company) has been rejected earlier, though both are popular topics with high acceptance rates.", "In addition, it seems that the DynamicSpeakerModel is able to make use of user reactions.", "In the anecdotal example illustrated in Table 4, the user accepts the topic Arnold Schwarzenegger, which is correctly predicted by both TopicDecisionLSTM and DynamicSpeakerModel, but only the DynamicSpeakerModel correctly predicts the rejection of politics later.", "We then analyze what language features are learned by the latent modes in our dynamic speaker model.", "For each mode, we extract the top utterances sorted by their association scores as computed in (1).", "[Table 4 excerpt — Bot: Do you like the actor Arnold Schwarzenegger?]", "Examples from the most representative modes are provided in Appendix A. In brief, we find two separate modes related to positive and negative reactions; other modes correspond to classes of dialog acts, such as yes/no answers, topic requests, and conversation-closing.", "Within topic request modes, some involve short topic phrases (e.g., holidays ) while others use complete requests (e.g.
can we talk about cats ).", "Along this line, some modes are associated with relatively terse users and others with talkative users.", "These findings indicate that our model captures various user characteristics that might be useful for predicting their interaction preferences.", "Dialog act analysis, which identifies the illocutionary force of a speaker's utterance following speech act theory (Austin, 1975; Searle, 1969), is widely used for conversations.", "In this section, we apply the proposed dynamic speaker model to the dialog act classification task.", "We use the Switchboard Dialog Act Corpus (SwDA), which has dialog act annotations on two-party human-human speech conversations (Jurafsky et al., 1997; Stolcke et al., 2000).", "In total, there are 1155 open-domain conversations with manual transcripts.", "Following recent work, we use 1115 conversations for training, 19 for testing, and the remaining 21 for development.³", "(Footnote 3: The training and test split files are downloaded from https://web.stanford.edu/jurafsky/ws97/ .)", "The original fine-grained dialog act labels are mapped to 42 classes.⁴", "For this set of experiments, we use the golden segmentation and manual transcripts provided in the dataset.", "Motivated by the recent success of unsupervised models (Peters et al., 2018; Devlin et al., 2019), we also study whether the dynamic speaker model can benefit from training on external unlabelled data.", "Thus, we use speech transcripts from 5850 conversations in the Fisher English Training Speech Part 1 Transcripts (Cieri et al., 2004), which (like Switchboard) consist of two-party human-to-human telephone conversations but without annotations for dialog acts.", "We use an attention-based LSTM tagging model for the dialog act classification.", "As shown in Fig.
2, the tagging LSTM is stacked on two speaker state trackers.", "Note the two trackers share the same parameters as well as the underlying latent mode analyzer and speaker language predictor.", "They generate speaker embeddings by tracking the corresponding speakers separately.", "Let $\alpha(t)$ and $\beta(t)$ denote the mappings from the global turn index $t$ to the speaker-specific turn indices for speaker A and speaker B, respectively.", "The mapping returns a null value if the turn $t$ is not associated with the corresponding speaker.", "The speaker state vectors are used as the input to the tagging LSTM for the corresponding turns, i.e., $x_t = I(h^A_{\alpha(t)}, h^B_{\beta(t)})$, where $I(\cdot, \cdot)$ is a switcher that chooses $h^A_{\alpha(t)}$ or $h^B_{\beta(t)}$ depending on whether $\alpha(t)$ and $\beta(t)$ return a non-null value.", "(Footnote 4: Dialog act labels are mapped using scripts from http://compprag.christopherpotts.net/swda.html . Utterances labelled as segment are merged with the corresponding previous utterance by the same speaker.)", "The tagging LSTM also maintains a dictionary of $L$ dialog act vectors $g_1, \ldots, g_L$.", "The dialog act probabilities $y_t \in \mathbb{R}^L$ at turn $t$ are computed using the dot-product attention mechanism, i.e., $y_t = f(z_t, [g_1, \ldots, g_L])$, where $f(\cdot, \cdot)$ is defined as in (1), and $z_t$ is the hidden state vector of the LSTM.", "The tagging LSTM computes hidden states as $z_{t+1} = g^{DA}(r^{DA}(\bar{g}_t, x_{t+1}), z_t)$, where $\bar{g}_t = \sum_{l=1}^{L} y_{t,l} g_l$, $g^{DA}(\cdot, \cdot)$ is the LSTM function, and $r^{DA}(\cdot, \cdot)$ is a linear function with learnable parameters.", "In this way, both the dialog act prediction history and the utterance information are encoded in the hidden states.", "The training objective of the tagging LSTM is the sum of the cross-entropy between the dialog act label and the probabilities $y_t$ at each turn.", "The training configuration is the same as for the topic decision classifier described in §3.2.", "We tune the size of the hidden states $z_t$ and dialog act embeddings $g_l$ from $\{64, 128\}$ with arbitrary combinations, and vary the number of LSTM hidden layers from $\{1, 2\}$.", "The best model is selected according to the development set accuracy.", "In our experiments, we compare three settings for using the dynamic speaker model.", "In the pre-train setting, the dynamic speaker model is trained on the SwDA data without the dialog act labels.", "We then freeze the model when training the tagging LSTM.", "In contrast, in the pre-train + fine-tune setting, the dynamic speaker model is fine-tuned together with the tagging LSTM.", "Finally, in the pre-train w/ Fisher + fine-tune setting, the dynamic speaker model is pre-trained on the combination of the SwDA and Fisher datasets, and then fine-tuned together with the tagging LSTM on the SwDA dataset.", "For all three settings, we use the same vocabulary $\mathcal{V}$ of size 21K, which combines all tokens from the SwDA training set and those appearing at least 5 times in the Fisher corpus.", "We compare our results to the best published results.", "In (Kalchbrenner and Blunsom, 2013), a convolutional neural network (CNN) is used to encode utterances.", "A recurrent neural network (RNN) is then applied on top of the CNN to encode both utterances and speaker
label information for predicting the dialog acts.", "Ji et al. (2016) propose a discourse-aware RNN LM by treating the dialog act as a conditional variable to the LM.", "[Table 5: Test set accuracy for SwDA dialog act classification. Model / Acc (%) — Kalchbrenner and Blunsom (2013): 73.9; Tran et al. (2017a): 74.2; Tran et al. (2017b): 74.5; Tran et al. (2017c): 75.6; Ji et al. (2016): 77.0; pre-train: 75.6; pre-train + fine-tune: 77.2; pre-train w/ Fisher + fine-tune: 78.6.]", "Tran et al. (2017a,b,c) focus on building hierarchies of RNNs to model the dialog context using previous utterances or dialog act predictions.", "Results from (Lee and Dernoncourt, 2016) and (Liu et al., 2017) are not directly comparable due to different experiment settings.", "Experiment results are summarized in Table 5.", "Our pre-train setting performs on par with previous state-of-the-art supervised models except Ji et al. (2016).", "Fine-tuning significantly improves the performance and allows the model to achieve an accuracy similar to that of Ji et al. (2016).", "The best result is achieved by pre-training the dynamic speaker model with both the SwDA and Fisher datasets.", "The improvement of pre-train w/ Fisher + fine-tune over pre-train + fine-tune is statistically significant based on McNemar's test ($p < .001$).", "This illustrates the advantage of the unsupervised learning approach for the proposed model, as it can exploit a large amount of unlabelled data.", "We analyze the latent modes learned on SwDA using the same approach as in §3.5.", "Again, specific examples are included in Appendix A.
Overall, there are several modes corresponding to coarse-grained dialog acts, such as statements, questions, agreement, backchannel, and conversation-closing.", "Many modes characterize statements, probably due to their high relative frequency in the corpus.", "Among the statement modes, there are two distinct groups, one containing multiple filled pauses, such as uh, you know, and well, and the other with because-clauses.", "The fact that coarse-grained dialog act information is partly encoded in the modes may help with recognizing the dialog act.", "In addition, we use the speaker gender information available in the SwDA data to determine whether the latent modes in the dynamic speaker model pick up gender-related language variation.", "Specific examples and statistics are included in Appendix B. Cohen's d (Cohen, 1988) is used to measure the strength of the difference between the association score distributions of male vs. female utterances for individual modes.", "Based on Cohen's d, we identified two modes that have a strong association with male speakers, and two with female speakers.", "All have significantly different ($p < 0.001$) distributions of association scores for female vs.
male speakers using the Mann-Whitney U test.", "In the top associated utterances for the male modes, we find utterances with several filled pauses, a feature that has been found to be indicative of male speakers in previous work on Switchboard (Boulis and Ostendorf, 2005).", "The female modes are mostly agreement, acknowledgement, and backchannel, which aligns with a popular sociolinguistic theory that females are more responsive (Coates, 1998).", "Based on this, we conclude that some speaker gender language variations are indeed captured by the learned modes.", "As reviewed by Zukerman and Litman (2001), user modeling for conversational systems has a long history.", "The research can be traced back to the GRUNDY system (Rich, 1979), which categorizes users in terms of hand-crafted sets of user properties for book recommendation.", "Other systems have focused on different aspects of users, e.g., the expertise level of the user in a specific domain (Chin, 1986; Sleeman, 1985; Paris, 1987; Hovy, 1987), the user's intent and plan (Allen and Perrault, 1980; Carberry, 1983; Litman, 1986; Moore and Paris, 1992), and the user's personality (Mairesse and Walker, 2006; DeVault et al., 2014; Fung et al., 2016; Fang et al., 2017).", "User modeling has also been employed for personalized topic suggestion in recent Alexa Prize socialbots, using a pre-defined mapping between personality types and topics (Fang et al., 2017), or a conditional random field sequence model with hand-crafted user and context features (Ahmadvand et al., 2018).", "Modeling speakers with continuous embeddings for neural conversation models is studied in (Li et al., 2016), where the model directly learns a dictionary of speaker embeddings.", "Our unsupervised dynamic speaker model differs from previous work in that we build speaker embeddings as a weighted combination of latent modes, with weights computed based on the utterance.", "Thus, the model can construct embeddings for any new user and dynamically update the
embeddings as the conversation evolves.", "Speaker language variation has been analyzed in previous work and incorporated into NLP models.", "Preotiuc-Pietro et al. (2016) and Johannsen et al. (2015) find that speaker-level language variation affects lexical choices and even syntactic structure, based on psycholinguistic hypotheses.", "Speaker demographics are used to improve both low-level tasks such as part-of-speech tagging (Hovy and Søgaard, 2015) and high-level applications such as sentiment analysis (Volkova et al., 2013) and machine translation (Mirkin et al., 2015).", "Lynn et al. (2017) introduce a continuous adaptation method to include user age, gender, personality traits, and language features for personalizing several supervised NLP models.", "Different from previous work, we study the use of speaker embeddings learned from utterances in an unsupervised fashion and analyze the possible interpretability of the latent modes.", "In this paper, we address the problem of modeling speakers from their language using an unsupervised approach.", "A dynamic speaker model is proposed to learn speaker embeddings that are updated as the conversation evolves.", "The model achieves promising results on two representative tasks in dialogs: user topic decision prediction in human-socialbot conversations and dialog act classification in human-human conversations.", "In particular, we demonstrate that the model can benefit from unlabelled data in the dialog act classification task, where we achieve state-of-the-art results.", "Finally, we carry out an analysis of the learned latent modes on both tasks, and find cues that suggest the model captures speaker characteristics such as intent, speaking style, and gender.", "For future work, it could be interesting to explore guiding some latent modes with a few examples to pick up specific user features such as personality traits.", "We thank the anonymous reviewers for their helpful feedback.", "We also thank Trang Tran for her
feedback on the paper.", "This research is supported by the Amazon Alexa Fellowship and the Tencent AI Lab Rhino-Bird Gift Fund.", "The conclusions and findings are those of the authors and do not necessarily reflect the views of the sponsors." ]
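The two-stage mode attention of the latent mode analyzer described above (Eq. 1) can be sketched in a few lines of numpy. This is an illustrative re-implementation, not the authors' code: the function name `mode_attention` and the toy dimensions are our own, and in the actual model $P$, $Q$, and the global mode vectors are trained end-to-end with the LM objective rather than fixed.

```python
import numpy as np

def mode_attention(s_t, U, P, Q):
    """Sketch of Eq. (1): association scores a_{t,k} over K global mode
    vectors and the resulting local speaker mode vector u_t.

    s_t : (2m,) utterance summary vector from the BiLSTM
    U   : (K, c) global mode vectors, one per row
    P   : (c, 2m) and Q : (c, c) learnable projection matrices
    """
    ps = P @ s_t                       # project the summary into R^c
    qu = U @ Q.T                       # row k holds Q @ u_k
    logits = qu @ ps                   # <P s_t, Q u_k> for each mode k
    a = np.exp(logits - logits.max())  # numerically stable softmax
    a /= a.sum()
    u_t = a @ U                        # weighted combination of modes
    return a, u_t
```

Since the softmax weights sum to one, the local speaker mode vector always lies in the convex hull of the $K$ global modes, which is what lets the learned modes act as interpretable anchors for speaker language features.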
[ "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "method", "abstain", "method", 
"method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "method", "method", "abstain", "abstain", "objective", "result", "abstain", "other", "other", "other", "other" ]
[ "Advanced pre-trained models for text representation have achieved state-of-the-art performance on various text classification tasks.", "However, the discrepancy between the semantic similarity of texts and labelling standards affects classifiers, i.e., it leads to lower performance in cases where classifiers should assign different labels to semantically similar texts.", "To address this problem, we propose a simple multitask learning model that uses negative supervision.", "Specifically, our model encourages texts with different labels to have distinct representations.", "Comprehensive experiments show that our model outperforms the state-of-the-art pre-trained model on both single- and multi-label classifications, sentence and document classifications, and classifications in three different languages.", "Text classification generally consists of two processes: an encoder that converts texts to numerical representations and a classifier that estimates hidden relations between the representations and class labels.", "The text representations are generated using $N$-gram statistics (Wang and Manning, 2012), word embeddings (Joulin et al., 2017; Wang et al., 2018), convolutional neural networks (Kalchbrenner et al., 2014; Zhang et al., 2015; Shen et al., 2018), and recurrent neural networks (Yang et al., 2016, 2018).", "Recently, powerful pre-trained models for text representations, e.g.
Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2019), have shown state-of-the-art performance on text classification tasks using only the simple classifier of a fully connected layer.", "However, a problem occurs when a classification task is adversarial to text encoders.", "Encoders aim to represent the meanings of texts; hence, semantically similar texts tend to have closer representations.", "[Table 1 (excerpt): Sentence / Label / BERT; first row sentence: A cold is a legit disease.]", "Meanwhile, a classifier should distinguish subtle differences that lead to different label assignments, even though the texts are semantically similar.", "Table 1 shows an example of classification results using BERT for the MedWeb dataset (Wakamiya et al., 2017).", "This task requires labelling the disease contracted by the writer of a text.", "Although both texts in Table 1 refer to the common cold, only the second example implies that the writer had a cold.", "BERT mistakenly labelled both texts as Cold¹, likely owing to their semantic relatedness.", "When the standard of class label assignment disagrees with semantic similarity, the classifier tends to be error-prone owing to the excessive effects of the semantic similarity.", "To address this problem, we propose utilizing negative examples, i.e., texts with different labels, to enable negative supervision of the encoder for generating distinct representations for each class.", "In this study, we design a simple multitask learning model that trains two models simultaneously with a shared text encoder.", "The first model learns an ordinary classification task (herein referred to as the main task).", "Meanwhile, the second model encourages representations with different class labels to be distinct (herein referred to as the auxiliary task).", "(Footnote 1: We use the typewriter font to indicate a class label throughout this paper.)", "[Figure 1: Our model consists of a classifier, discriminator, and shared text encoder. The main task learns classification, while the auxiliary task gives negative supervision to generate distinct representations for sentences with different labels.]", "We empirically show the effectiveness of our model using the following standard benchmarks of five single-label and four multi-label classification datasets.", "This study has two main contributions.", "Our multitask learning model consistently outperforms the state-of-the-art model in terms of both single- and multi-label classifications, sentence and document classifications, and classifications in three languages.", "Figure 1 shows an overview of our multitask learning framework, which consists of main and auxiliary tasks.", "Herein, we refer to the model for the main task as a classifier and the model for the auxiliary task as a discriminator.", "The overall loss function $\mathcal{L}$ sums the loss of the main task $\mathcal{L}_m$ and that of the auxiliary task $\mathcal{L}_a$: $\mathcal{L} = \mathcal{L}_m + \mathcal{L}_a$.", "The classifier and discriminator share and jointly optimize the text encoder, which encodes an input text into a $d$-dimensional vector $v \in \mathbb{R}^d$.", "In this paper, we use the terms text and representation interchangeably when the intention is obvious from the context.", "The main task is the primary classification task to optimize.", "We use a simple classifier as employed in BERT.", "The classifier takes an input vector $v_m$ and calculates probabilities $p \in \mathbb{R}^{|C|}$ to assign a set of class labels $C$: $p = g(W v_m + b)$, where $W \in \mathbb{R}^{|C| \times d}$ and $b \in \mathbb{R}^{|C|}$ are parameters of the classifier, in which $|\cdot|$ counts the number of elements in a set.", "For $g$, we employ a softmax function for single-label classification and a sigmoid function for multi-label classification.", "In both cases, $\mathcal{L}_m$ is the negative log-likelihood of the predictions.", "The auxiliary task aims
to give negative supervision to encourage distinct representations of texts with different labels.", "The discriminator samples a set of n texts v_{a1}, ..., v_{an} from the same batch as v_m, all of which have different labels from v_m.", "To encourage these texts to have distinct representations, we designed the loss function L_a as L_a = (1/n) Σ_i s_{mi}, with s_{mj} = 1 + cossim(v_m, v_{aj}), where the cossim function computes the cosine similarity between the representations.", "This loss function intuitively encourages the negative examples to have smaller cosine similarities.", "We conducted a comprehensive evaluation to investigate the performance of our model in terms of", "(a) single- and multi-label classifications,", "(b) sentence- and document-level classification, and", "(c) different languages.", "We collected the standard evaluation datasets from heterogeneous sources (including SentEval: https://github.com/facebookresearch/SentEval), as summarised in Table 2.", "Table 2: Statistics on the datasets (Input / Language / |C| / # of train, validation, and test data). MR: sentence, English, 2, 6,823 / 1,706 / 2,133. CR: sentence, English, 2, 2,416 / 604 / 755. SST-5: sentence, English, 5, 8,544 / 1,101 / 2,210. TREC: sentence, English, 6, 4,361 / 1,090 / 500. SUBJ: sentence, English, 2, 6,400 / 1,600 / 2,000. MedWeb: sentence, Japanese/English/Chinese, 8, 1,536 / 384 / 640 each. arXiv: document, English, 40, 38,188 / 9,548 / 11,935.", "product reviews.", "SST-5 Multi-class classification of the fine-grained sentiment polarity of movie reviews.", "Labels are Positive, Somewhat Positive, Neutral, Somewhat Negative, and Negative.", "SUBJ Binary classification of subjectivity.", "Because the MR, CR, and SUBJ datasets do not separate validation and test sets, we split 20% of each dataset for testing and 20% of the remainder for validation.", "The evaluation metric for these single-label classification tasks is accuracy.", "We used the NTCIR-13 MedWeb
(Wakamiya et al., 2017) and arXiv datasets (Yang et al., 2018) for multi-label classification.", "Because the arXiv dataset released by Yang et al. (2018) removed all line breaks, we created one ourselves.", "We collected abstracts and categories of papers submitted to arXiv from January 1st, 2019 to June 4th, 2019 using the arXiv API.", "All question types are in the appendix.", "where y_i and ŷ_i are one-hot vectors of gold and predicted labels, respectively, and I(x) takes 1 when x is true and takes 0 otherwise.", "M is the size of a test set.", "As a text encoder, we employed BERT and a Hierarchical Attention Network (HAN) (Yang et al., 2016) for generating sentence and document representations, respectively.", "For BERT, we used the pre-trained BERT-base (d = 768; https://github.com/google-research/bert).", "We implemented the HAN following Yang et al. (2016), who used the bi-directional Gated Recurrent Unit as the encoder with a hidden size of 50 (d = 50).", "The embedding layer of the HAN was initialised using CBOW (Mikolov et al., 2013) embeddings (with dimensions of 200), which were trained using negative sampling on the training and development sets of each task.", "For systematic comparison, we investigated the performance of the following models.", "As a baseline, we compared models that conduct only the main task (referred to as Baseline), which corresponds to the fine-tuned BERT-base for sentence classification and the original HAN for document classification.", "Note that this BERT baseline significantly outperforms previous state-of-the-art methods, which were also compared in the experiment.", "To investigate the effects of negative supervision at", "the auxiliary task, we compared our model to one that predicts a sentence with the same label.", "Concretely, this model conducts classification given cosine similarities using cross entropy loss (referred to as ACE (the auxiliary task
with cross entropy loss)).", "Furthermore, we evaluated two variations of our model.", "The first purely gives negative supervision, i.e., the auxiliary task only encourages the generation of representations distinct from the negative examples, as described in Section 2.2 (referred to as AAN (the auxiliary task using all negative examples)).", "The second uses the following margin-based loss as L_a with a positive example as well as negative examples: L_a = max(0, δ − s_{mk} + (1/(n−1)) Σ_{i≠k} s_{mi}), where the k-th sample is selected to have the same label as the input v_m to the main task and δ is the margin, empirically set to 0.4 (referred to as AM (the auxiliary task with the margin-based loss)).", "The intuition is that texts with the same label should have more similar representations than negative examples.", "We set the batch size of the main task to 16 and set n to four in the auxiliary task, which performed best on the validation set of the MR task.", "We used early stopping to cease training when the validation score did not improve for 10 epochs.", "The optimization algorithm used was Adam (Kingma and Ba, 2015) with β1 = 0.999 and β2 = 0.9.", "For each task, we selected the best learning rate among 1e-5, 3e-5, and 5e-5 using the validation set.", "To alleviate randomness owing to initialization, we reported average scores of 10 trials, excluding the best and worst results.", "Table 3 shows the performance of all compared methods as well as the performance of the previous state-of-the-art methods (referred to as SOTA).", "The results in Table 3 indicate that our models of AM and AAN consistently outperform the strong Baselines on both single-label and multi-label classifications, sentence and document classifications, and classifications in different languages.", "Most notably, our models are effective even for multi-label classification, which is more challenging than its single-label counterpart.", "In general, AAN achieved greater
performance than AM.", "However, their effectiveness turned out to be task-dependent.", "Unlike AM and AAN, ACE degraded the performance of the Baseline except for the MedWeb Japanese task.", "This result shows that simple multitask learning is ineffective and that our design using negative supervision is crucial.", "SST-5 is an exception wherein our models degraded the performance of the Baseline.", "We hypothesise that this is because its class labels are gradational, e.g., Somewhat Negative is closer to Negative than to Positive.", "AM and AAN treat all negative examples equally, disregarding factors such as relations between class labels.", "Future work should focus on the semantic relations among class labels in the auxiliary task.", "Multitask learning has been employed to improve the performance of text classification (Liu et al.,", "2019; Xiao et al., 2018).", "Previous studies aimed to improve multiple tasks; hence, they required multiple sets of annotated datasets.", "In contrast, our method does not require any extra labelled datasets and is easily applicable to various classification tasks.", "The methods proposed in Arase and Tsujii (2019) and Phang et al.
(2018) improved the BERT classification performance by further training the pre-trained model using natural language inference and paraphrase recognition.", "Similar to multitask learning, both methods require an additional large-scale labelled dataset.", "Furthermore, these previous studies revealed that the similarity of tasks in training affects the models' final performance (Xiao et al., 2018; Arase and Tsujii, 2019).", "Our method achieved consistent improvements across tasks, indicating its wider applicability.", "In this paper, we proposed a simple multitask learning model that uses negative supervision to generate distinct representations for texts with different labels.", "Comprehensive evaluation empirically confirmed that our model consistently outperformed BERT and HAN models on single- and multi-label classifications, sentence and document classifications, and classifications in different languages.", "Our multitask learning model provides a general framework that is easily applicable to existing text classification models.", "In future work, we will examine semantic relations between class labels in the auxiliary task.", "Moreover, we will adapt our model to text generation tasks.", "We expect that our model will encourage a generation model to give texts with different labels, such as styles, distinct representations, which will result in class-specific expressions.", "This work was supported by JST Number JPMJCR18Y1, Japan.", "Alexis Conneau and Douwe Kiela. 2018. SentEval: An Evaluation Toolkit for Universal Sentence Representations. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation, pages 1699-1704.", "Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 328-339.", "Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov.",
"2017.", "Bag of Tricks for Efficient Text Classification.", "In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics , pages 427431.", "Nal Kalchbrenner, Edward Grefenstette, and Phil Blun-som.", "2014.", "A Convolutional Neural Network for Modelling Sentences.", "In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics , pages 655665.", "Diederik Kingma and Jimmy Ba.", "2015.", "Adam: A Method for Stochastic Optimization.", "In Proceedings of the 3rd International Conference on Learning Representations , pages 115.", "Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jian-feng Gao.", "2019.", "Multi-task deep neural networks for natural language understanding.", "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics , pages 44874496." ]
[ "abstain", "abstain", "objective", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "method", "result", "objective", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "objective", "other", "other", "other", "abstain", "objective", "result", "method", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain" ]
[ "Recent work has attempted to enhance vector space representations using information from structured semantic resources.", "This process, dubbed retrofitting Faruqui et al. (2015), has yielded improvements in word similarity performance.", "Research has largely focused on the retrofitting algorithm, or on the kind of structured semantic resources used, but little research has explored why some resources perform better than others.", "We conducted a fine-grained analysis of the original retrofitting process, and found that the utility of different lexical resources for retrofitting depends on two factors: the coverage of the resource and the evaluation metric.", "Our assessment suggests that the common practice of using correlation measures to evaluate increases in performance against full word similarity benchmarks", "1) obscures the benefits offered by smaller resources, and", "2) overlooks incremental gains in word similarity performance.", "We propose root-mean-square error (RMSE) as an alternative evaluation metric, and demonstrate that correlation measures and RMSE sometimes yield opposite conclusions concerning the effi-cacy of retrofitting.", "This point is illustrated by word vectors retrofitted with novel treatments of the FrameNet data (Fillmore and Baker, 2010).", "One of the most challenging tasks in the field of Natural Language Processing (NLP) is accurately encoding meaning into a computational system.", "Currently, the predominant approach is to represent the meanings of linguistic units, such as words or phrases, as vectors in a high-dimensional space.", "Vector embeddings are trained over large text corpora using machine-learning techniques, and have proven useful for a wide range of applications, such as named entity recognition (Turian et al., 2010), semantic role labeling (Collobert et al., 2011), sentiment analysis (Socher et al., 2013), and machine translation (Zou et al., 2013).", "Word vectors are typically trained solely on the 
distributional information from text corpora.", "Recent work has attempted to improve word vectors by infusing them with information from semantic resources in a post-processing step.", "This technique, referred to as retrofitting, was introduced by Faruqui et al. (2015).", "They adjusted pre-trained embeddings based on lexical relations in WordNet (Miller, 1995), FrameNet (Fillmore and Baker, 2010), and the Paraphrase Database (Ganitkevitch et al., 2013).", "In some cases, this method yielded gains in word similarity performance.", "Retrofitting has been extended in a variety of ways.", "Briefly, these include", "1) adding word-to-word relations to encompass more than just similarity relations, such as by directly introducing antonymy relations (Mrksic et al., 2016), or by explicitly modeling the pairwise relations between items (Lengerich et al., 2017);", "2) increasing the size of the output vocabulary (Speer et al., 2017), or extending the process to affect the word vectors of words outside of the semantic resource (Glavas and Vulic, 2018); and", "3) constructing sense-specific word vectors using a word sense ontology (Jauhar et al., 2015), or word sense information learned from parallel text corpora (Ettinger et al., 2016).", "However, while Faruqui et al.
(2015) has certainly spawned a productive line of research into improving pre-trained word vectors, the original study contained a puzzling finding: retrofitting with certain semantic resources actually appeared to harm the quality of the word embeddings.", "This seems counter-intuitive.", "In principle, if semantic resources contain information that is not already captured by the word vectors, then retrofitting should always improve them.", "In order to understand why some semantic resources appear better suited for retrofitting word vectors, we conducted a fine-grained analysis of Faruqui et al.'s", "original technique.", "Given their popularity, we focused on word similarity evaluations.", "We observe that the perceived usefulness of a semantic resource depends on its coverage of the words in the evaluation benchmark.", "Furthermore, we report that the choice of evaluation metric can lead to different conclusions.", "We note that some gains in performance are not captured by correlation measures, and propose that root-mean-square error (RMSE) is more appropriate for measuring changes in word similarity performance.", "The original retrofitting algorithm from Faruqui et al.
(2015) is described below.", "The process essentially moves the word vectors of related words closer together.", "A semantic resource can be regarded as a graph which covers a vocabulary V = {w_1, ..., w_n} and denotes relations between them as edges (w_i, w_j) ∈ E.", "Given a set of pre-trained distributional vectors Ŵ = {ŵ_1, ..., ŵ_d} and a semantic resource with edges E, the goal is to learn a new set of vectors W = {w_1, ..., w_d}.", "Here w_i is the word vector corresponding to vocabulary item w_i.", "The objective function to be minimized is the following: Ψ(W) = Σ_{w_i ∈ V} [ α_i ||w_i − ŵ_i||² + Σ_{(w_i, w_j) ∈ E} β_ij ||w_i − w_j||² ] (1) The first term of the inner sum ensures that the vectors do not stray too far away from their original representations (controlled by α), while the second term compels the vectors to move closer to their neighbors in the semantic resource (controlled by β).", "In Faruqui et al.'s", "experiments, all α_i = 1, and all β_ij = 1/degree(w_i), where degree(w_i) refers to the number of neighbors w_i had in the resource.", "This is equivalent to specifying that half of the new retrofitted vector will come from the distributional data while the other half will be an average of its neighbors' word vectors.", "They allowed the process to run for 10 iterations.", "We retained these settings in our experiments.", "We employed three semantic resources in our analyses.", "Table 1 shows the number of terms and groupings in each resource after removing terms containing numbers or punctuation.", "WordNet.", "WordNet (Miller, 1995) is a large lexical database of English words.", "The resource is composed of synsets, groupings of synonyms.", "Synsets are linked together through a small number of semantic relations.", "We follow Faruqui et al.
(2015) and link each word form to its synonyms, hypernyms, and hyponyms (WN+).", "For instance, the word dog is linked to canine (synonym), corgi (hyponym) and domestic animal (hypernym).", "In order to faithfully replicate Faruqui et al., we collapsed part of speech and sense distinctions, meaning that a word form was linked to all of its related words through all of its synsets.", "For instance, dog's neighbors include corgi through the noun dog (e.g. Sam pet the dog.) and track through the verb to dog (e.g. The task dogged me.) Although the word vectors and evaluations used in this study are insensitive to part of speech and sense distinctions, the number and order of groupings affects the retrofitting procedure.", "In particular, as noted by Speer and Chin (2016), the results depend on the order in which the groupings are iterated over.", "Though we attempted to group words by their synsets, this appeared to lead to poorer performance and we do not report those results here.", "PPDB.", "The paraphrase database (Ganitkevitch et al., 2013) contains millions of English paraphrases automatically extracted from bilingual parallel corpora.", "The core idea is that if a non-English phrase translates to two distinct English strings, then these may be considered paraphrases of each other.", "For instance, since German festgenommen translates to both thrown into jail", "and imprisoned, the latter two are listed as paraphrases.", "(The number of groupings for PPDB is approximate, taken as the number of unique sets of words in Faruqui et al.'s pre-processed lexicon file.)", "Faruqui et al.
(2015) used the XL lexical pack from PPDB 1.0.", "Since this version is no longer publicly available, we used their preprocessed file (PPDB).", "FrameNet.", "FrameNet (Fillmore and Baker, 2010) is a highly-interconnected lexical database of English containing sense-annotated sentences.", "The basic units of FrameNet are semantic frames, which specify the conceptual structure necessary to understand sets of lexical units (LUs).", "For instance, the frame Attack contains LUs such as attack.v, attack.n and offensive.a, which can be understood in light of the frame elements (FEs) Assailant and Victim.", "We performed two experiments with the FrameNet data.", "In the first, we grouped words together if they shared a frame (FN).", "Note that this differs from the treatment of WordNet because the frame groupings retain part of speech and sense distinctions.", "Although this method follows Faruqui et al. (2015), we located a bug in their code which led to a loss of about 1/3 of the data: the original code did not correctly handle polysemy, which is widespread in FrameNet.", "For our second experiment, we grouped words together based on the FEs that they filled (FN-ANNO).", "All of the FrameNet FEs were used in this task (i.e.
both core and non-core FEs).", "Since FEs are defined with respect to their frames, each semantic role is frame-specific.", "The rationale is that words which can occupy the same semantic role should be more similar.", "We created groupings from the last nouns which appeared in the FE fillers in the annotation data.", "To illustrate, since the annotation data linked to the FE Assailant of the Attack frame included the nouns enemy, troop, terrorist and forces, their corresponding word vectors were moved closer together.", "Note that all of our retrofitting analyses ignored the frequency of a word's neighbor: even if enemy filled the FE Assailant 100 times, its effect on its neighbors would be identical to if it had only filled the FE once.", "We recognize that the last noun heuristic is simplistic.", "However, we estimate that around 73% of the syntactic heads of FE fillers are nouns.", "Of these, 68% contain only one noun, and 18% contain only two nouns.", "Taken together, this implies that a more sophisticated approach is unlikely to alter the results.", "In addition to the last noun heuristic, we considered grouping the first nouns in the FE fillers, all of the nouns in the FE fillers, and the nouns from FE fillers which contained only one noun.", "All of these experiments yielded similar results, so we only report the last noun condition here.", "Nouns were identified using the default NLTK (Bird and Loper, 2004) English part-of-speech tagger.", "word vector embeddings.", "SG.", "word2vec (Mikolov et al., 2013) is widely used to learn vector representations from distributional information.", "In the continuous skip-gram architecture (SG), the target word is fed into a log-linear classifier to predict surrounding words within a given context window.", "The available vectors were trained on about 100 billion words from the Google News dataset.", "GloVe.", "Global Vectors for Word Representation (Pennington et al., 2014) is a global log-bilinear regression model
which captures both global and local word co-occurrence statistics.", "We use the 300-dimension vectors trained on 6 billion words from Wikipedia and the English Gigaword corpus.", "Word similarity judgments are the most widely used method of intrinsic evaluation.", "We chose four commonly used word similarity datasets comprised of nouns, verbs and adjectives.", "MEN3K (Bruni et al., 2012) contains 3,000 pairs of words from a set of labels for an image database.", "Interestingly, although Bruni et al. claim that their dataset contains 3,000 pairs of randomly selected words that occur [as labels], it only contains 751 unique words.", "(By our calculations, the expected number of unique words obtained from 3,000 random pairs drawn from 20,515 labels, the number in their image database, is around 5,200.)", "Therefore, as an additional evaluation of high-frequency words, we included MTURK-771 (Halawi et al., 2012), a crowd-sourced dataset of 771 word-pairs consisting of 1,113 unique words, which we will refer to as MT771.", "The Stanford Rare Words (RW) dataset (Luong et al., 2013) is comprised of 2,034 word-pairs formed from 2,951 unique words.", "SL999 (Hill et al., 2015) explicitly quantifies semantic similarity between pairs of words.", "The dataset contains 999 word pairs from 1,028 unique words.", "The word pairs in SL999 were chosen to cover the full range of concreteness within each part of speech category.", "We included RW", "and SL999 to examine whether the results of our analyses would differ for benchmarks containing common vs. rare words and for those capturing association and relatedness vs.
similarity only.", "The standard approach to evaluate the performance of word vectors on word similarity judgments is to compute the cosine similarity values between each pair of words in the dataset and then calculate the correlation between these values and the similarity scores collected from human raters.", "A similar technique is used to assess the utility of different semantic resources in retrofitting word vectors: increases in correlation are taken to be indicative that information from the resource has been successfully injected into the word vectors.", "For both types of evaluations, Spearman correlation has become the preferred correlation measure.", "However, there are several reasons that this method may be misleading.", "The first concerns the issue of the relative coverage of each resource.", "Simply put, not every resource contains all of the words in the evaluation dataset.", "If a resource lacks the words for a particular similarity judgment, then the predicted score will be the same for both the baseline and retrofitted vectors.", "This may have important consequences on the evaluation metric: the fixed scores can throw off the global ranking of the predicted scores, which is measured by the Spearman correlation.", "For every word pair in a word similarity dataset, a resource can contain", "1) both words,", "2) one of the words, or", "3) neither of the words.", "If the goal of the evaluation is to determine whether the knowledge of particular semantic resources can be added to word vectors, then it seems reasonable to only evaluate the resource on the word pairs it covers.", "In this case, the resource will either group the two words together or place them in separate groups, which can be interpreted as explicitly indicating whether the two words are semantically related or not.", "Conversely, it is obvious that retrofitting will not improve the vectors for the word pairs for which neither word is in the semantic resource.", "The situation where 
only one word is present is more complicated.", "For example, imagine that a resource contained the word view but not the word skyline.", "Following retrofitting, the vector for view will move while the vector for skyline will stay the same.", "The relationship between view and skyline will either become more accurate or less accurate, but this change does not directly stem from the semantic resource.", "If the goal of the retrofitting evaluation is to assess the usefulness of particular semantic resources, then including these kinds of word pairs is misleading, since the observed changes are incidental and do not reflect the semantic groupings in the resource.", "In our analyses, all pairs shows the performance of the word vectors using all of the word similarity judgments, and pairs in resource shows their performance using only the subset comprised of judgments for which both words were contained in the semantic resource.", "Our more radical proposal is to consider an entirely different evaluation metric altogether.", "Measures of correlation indicate how well word vectors are able to predict the similarity judgments.", "Spearman correlation specifically measures how well word vectors are able to predict the correct rankings of similarity judgments.", "For example, according to the MEN3K dataset, brick and construction should be ranked as less similar than town and village.", "Another conceivable way to test the word vectors' ability to capture word similarity knowledge would be to directly compare the word vectors' predicted score with the human score.", "According to MEN3K, the average rated similarity for town and village was 43 out of 50.", "Taken literally, after normalizing the original scores, the cosine similarity should be exactly 0.86.", "We operationalized this by evaluating word vectors using root-mean-square error (RMSE).", "This approach seems particularly appealing for measuring the effects of retrofitting because each similarity judgment
contributes independently to the RMSE score.", "One may wonder whether Pearson correlation, which measures linear association, might serve as a better comparison to RMSE.", "To address this concern, we employed the harmonic mean of the Pearson and Spearman correlations as our correlation measure.", "This blends the linear measure (Pearson) with the standardly-employed measure (Spearman).", "However, we note that the resulting baseline and retrofitted scores were very similar across correlation measures, and so our conclusions regarding the choice of evaluation metric were unaffected by this decision.", "In the analysis that follows, we considered the effect of resource coverage and evaluation metric", "on the results of retrofitting.", "Table 2: Baseline word vector similarity performance (correlation / RMSE). NB: MT771 0.80 / 0.35, MEN3K 0.85 / 0.28, RW 0.54 / 0.37, SL999 0.66 / 0.21. GloVe: MT771 0.65 / 0.36, MEN3K 0.75 / 0.27, RW 0.35 / 0.55, SL999 0.38 / 0.26. SG: MT771 0.66 / 0.39, MEN3K 0.78 / 0.27, RW 0.45 / 0.45, SL999 0.45 / 0.25.", "There were four conditions:", "1) Correlation, all word pairs in the benchmark,", "2) Correlation, only those pairs in which both words were in resource,", "3) RMSE, all word pairs in the benchmark, and", "4) RMSE, only those pairs in which both words were in resource.", "If one of the words in a word pair was missing from the word vectors, then it was assigned a predicted cosine similarity of zero.", "(This only occurred with the RW dataset, and was limited to the all pairs conditions.)
4 Results Table 2 shows the baseline word similarity performance according to the harmonic mean of the correlation measures and RMSE.", "As a reference, we include the NumberBatch (NB) vectors, which recently demonstrated state-of-the-art word similarity performance (Speer et al., 2017).", "Correlation and RMSE give similar baseline results among the vector sets and their ability to predict the four similarity benchmarks: NB performs the best.", "The exception is that SG scores a slightly better RMSE score on the MEN3K dataset.", "Figure 1 shows the measured improvements in correlation due to retrofitting.", "This mirrors Faruqui et al. (2015)'s original finding that the PPDB offers the most improvements, and that grouping words by FrameNet frames (FN) usually leads to worse performance.", "Note that this finding is observed after correcting for the issue from Faruqui et al. which omitted data from FrameNet.", "This plot also suggests that using FrameNet frame elements (FN-ANNO) to group words is very detrimental to word vectors.", "As shown in Figure 2, simply switching the evaluation metric to RMSE paints a much different picture.", "(Since RMSE measures error rather than improvement, the y-axis has been inverted so that improvement is still in the upward direction.)", "The most obvious difference is that according to RMSE all of the semantic resources appear to help.", "Compared to Figure 1, there is a noticeable boost in performance for WN+, especially when evaluated against RW.", "Remarkably, FN-ANNO almost completely flips polarity.", "The result is especially dramatic against the evaluation sets containing common words (i.e.
MT771 and MEN3K): FN-ANNO goes from being the worst-performing resource to one of the best-performing resources.", "Figure 3 shows the measured improvements in correlation when considering only the word pairs in which both words were present in resources.", "The ranked order of the semantic resources is virtually the same.", "Note, however, that the measured performance of the relatively low-coverage resource, FrameNet (FN), has jumped considerably: in the RW with GloVe condition, it overtakes PPDB as the resource providing the best improvement.", "Figure 4 measures the change of RMSE for the word pairs covered by the resources.", "FrameNet (FN) appears to yield a substantial gain in performance for the subset of the similarity judgments that it covers, and again emerges as the highest-performance resource when evaluated against RW.", "A direct comparison of the all pairs to pairs in resource figures shows that the scores of the other resources change very little.", "The difference is attenuated because these resources are much larger and therefore cover most of the words in the similarity datasets.", "We interpret the jumps in performance from the all pairs to pairs in resource condition as evidence that evaluating a resource on word pairs containing a mixture of words within and outside of its vocabulary may obscure its benefits.", "Of course, low coverage is problematic if the goal is to improve word vectors on a large number of word judgments.", "The pairs in resource assessment is particularly antithetical to the spirit of RW, which is often employed to assess word vector coverage, and we admit that FN only contains 6.3% of the RW word pairs.", "Figure 1: Change in correlation after retrofitting, considering all word pairs. Figure 2: Change in root-mean-square error after retrofitting, considering all word pairs. Figure 3: Change in correlation after retrofitting, considering only the word pairs in each resource. Figure 4: Change in root-mean-square error after retrofitting, considering only the word pairs in each resource.", "However, we would argue that there is an important difference between concluding that a semantic resource does not yield gains in retrofitting vs. concluding that the resource improves the quality of the vectors it covers.", "We note that our four conditions yield similar conclusions according to the SL999 evaluation set.", "PPDB and WN+ consistently offer strong improvements, in contrast to FN and FN-ANNO.", "This is not surprising, and follows from the design principles underlying each resource: while PPDB and WordNet specifically group synonyms, FrameNet groups words which evoke the same semantic frame.", "In particular, some frames intentionally contain antonyms.", "As discussed above, the FrameNet groupings still appear useful in improving against MT771, MEN3K and RW, which have been argued to conflate association and similarity (Hill et al., 2015).", "Our most striking finding is that correlation measures and RMSE occasionally yield opposite conclusions regarding the utility of semantic resources.", "How can the retrofitted data simultaneously show a drop in correlation and a gain in RMSE?", "To examine this further, we plotted the effects of retrofitting GloVe with FN-ANNO against the MT771 benchmark (Figure 5).", "Vector cosine similarity (x-axis) is plotted against the human similarity judgments (y-axis).", "The left and right panels compare the vector performance before and after retrofitting.", "Each point represents a single word pair in the MT771 dataset.", "The dashed line corresponds to a model which perfectly predicts the gold standard.", "Points are color-coded with respect to this line: green points mark word pairs whose computed cosine similarity moved closer to the human judgments, while red points indicate word pairs that moved in the opposite direction.", "A small number of blue points indicate predictions which were unaffected by retrofitting because the word
pairs were not present in the resource.", "The color-coding in Figure 5 helps illustrate how both Spearman correlation (a measure of goodness) and RMSE (a measure of error) decrease.", "Most of the points are green, which means that from the perspective of individual word pairs, the predictions from the retrofitted vectors are more in line with the gold standard.", "This is directly reflected in RMSE.", "However, while most of the mass moves closer to the dashed line, retrofitting increases the scatter of the points, resulting in a worse association between the vector cosine similarities and the human similarity judgments.", "Three points are labeled in Figure 5 to show the effect of retrofitting on individual word similarity predictions.", "The diamond marks the word pair find & occurrence , which yields the most improvement according to MT771, with its absolute residual (i.e. distance from the human judgment) dropping 0.25.", "In comparison, the worst-performing word pair is occasion & second , marked with an X, whose residual increases by 0.14.", "This point is part of a noticeable band of red points located near the dashed line.", "Interestingly, for these points the predicted scores for the baseline word vectors were nearly correct, and retrofitting pushed them to overpredict similarity.", "The square marks film & movie , whose residual drops an almost imperceptible 0.003.", "The reason that retrofitting may lead to a worse correlation but a better RMSE score stems from how these measures are computed from the data.", "Each word pair contributes independently to the RMSE score.", "Whether a word pair improves or worsens in performance, it is simply tallied onto the running RMSE score.", "In this case, it is irrelevant whether retrofitting leads to a large increase in scatter.", "In contrast, correlation measures are anchored to the sample means of the two variables.", "After retrofitting, there may be an increase in the scatter in the predicted cosine similarity
values.", "Since on average the word pairs will be further away from the sample mean, there will be a drop in correlation.", "Put another way, a word pair's contribution to the correlation score depends on the positions of all of the other word pairs.", "The particularly large drop in correlation for FN-ANNO likely stems from the unusual heterogeneity of its groupings.", "For example, the word film occurs in the annotation data of 108 distinct FEs in FrameNet, and is grouped with dozens of varied words, such as book , movie , but also DNA and meeting .", "Each of the 108 retrofitting adjustments introduces some scatter.", "In contrast, the neighbors in other resources can be straightforwardly interpreted as related words, and each word will appear in a small number of groupings.", "We note that while it may be instructive to track the performance of individual word pairs, it is difficult to pinpoint the exact source of the change.", "For instance, in Figure 5 the words corresponding to the square (little change) and the X (worse change) are paired together, while the word pair linked to the diamond (best change) is not.", "Figure 5: Effects of retrofitting GloVe by grouping nouns filling the same frame element in the FrameNet annotation data, considering all word pairs.", "Vector-computed similarity is plotted against the MT771 gold standard judgments using the original word vectors (left panel) and the retrofitted vectors (right panel).", "The dashed line illustrates a model which exactly predicts the human judgments.", "Predicted scores which moved closer to that line are colored green, while points which moved away from the line are colored red.", "Blue points represent word pairs which were not present in the resource, and so were unaffected by retrofitting.", "The changes in Spearman correlation and RMSE are shown above the right panel.", "The symbols are discussed in the text.", "Faruqui et al.
(2015) attributed FrameNet's comparatively poor performance to the fact that it groups words according to abstract concepts, noting that push and grow are in the same frame.", "Such an argument might explain why FrameNet does not yield gains in performance against SL999, which was designed to capture true similarity judgments.", "However, we have shown that conclusions on the other similarity benchmarks rest on the evaluation metric and on the types of word pairs considered.", "In the RMSE and pairs in resource condition, grouping words by FrameNet frames appears at least as useful as PPDB and WordNet.", "Alternatively, FrameNet can be interpreted as a useful resource for retrofitting the vectors of the words it contains as lexical units.", "Our novel treatment of FrameNet groups nouns using its collection of sense-annotated sentences.", "Although all of the frame elements in these sentences were annotated by hand, the words filling the FEs are not, adding a component of randomness.", "Especially with more semantically general frames, frame elements can be realized by a large number of words.", "This contrasts with FrameNet frames, in which the placement of word senses is painstakingly deliberated, and a particular sense can only be put into one frame.", "PropBank (Bonial et al., 2014) is a large semantically-annotated corpus.", "The semantic roles (rolesets) in PropBank are defined with respect to individual verb and noun word senses.", "The types of words that fill these roles are presumably less varied than those that fill the semantically broader FrameNet frame elements.", "Additionally, PropBank is considerably larger than FrameNet.", "Consequently, we might predict that retrofitting word vectors to PropBank would yield stronger gains in word similarity judgment than to the FrameNet annotation data.", "We leave this task for future research.", "Grouping nouns using the FrameNet annotation data led to large drops in correlation against word similarity
benchmarks.", "However, these same data yielded large gains in RMSE performance.", "It might be inferred that semantic resources which have a similar stochastic component may result in lower correlation.", "The PPDB is automatically generated, introducing a similar element of randomness, but this is curtailed by its conservative criteria: paraphrases must be attested as translation equivalents.", "BabelNet (Navigli and Ponzetto, 2012) and ConceptNet (Speer et al., 2017) are knowledge resources derived from a number of collaboratively-constructed sources, such as Wikipedia and Wiktionary.", "Though their collaborative nature likely makes them less accurate than hand-curated resources such as WordNet, they have potential in improving the quality of word vectors (e.g. Speer and Chin, 2016).", "As we observed with FN-ANNO , RMSE may be a more informative measure of comparison than correlation in future retrofitting experiments involving heterogeneous resources.", "More generally, there does not seem to be a strong theoretical reason to prefer correlation-based measures over residual-based ones.", "Although the current practice is to report Spearman's rank correlation coefficient between the vector cosine similarities and human word similarity judgments, for over a decade the standard was to report the Pearson product-moment correlation coefficient.", "When Resnik (1995) pioneered the technique of comparing computed measures of similarity with human similarity ratings, he used (Pearson) correlation as one reasonable way to judge [computational measures of semantic similarity].", "The switch to Spearman correlation appears to have occurred in Gabrilovich and Markovitch (2007), who employed it without comment.", "Agirre et al.
(2009) did provide a justification, saying, In our belief Pearson is less informative, as the Pearson correlation suffers much when the scores of two systems are not linearly correlated, something which happens often due to the different nature of the techniques applied.", "Unfortunately, Agirre et al. (2009) mischaracterized the popularity of Spearman correlation by claiming that all researchers have used Spearman in evaluating the WordSim-353 dataset (Finkelstein et al., 2002).", "This likely stems from a misinterpretation of Gabrilovich and Markovitch's Table 4, which compares their methodology with earlier studies using Spearman correlation.", "The latter authors apparently recomputed word relatedness with the associated algorithms, as the cited studies report Pearson correlation values.", "Willmott (1981; 1982) specifically argues that Pearson correlation should not be used to evaluate model performance, and that RMSE is superior at comparing observed and simulated data.", "However, as far as we know, no previous work has seriously considered evaluating the performance of computed word similarity scores using RMSE.", "Reliance on Spearman correlation may lead to incorrect conclusions regarding the quality of word vectors.", "Retrofitting distributional word vectors using relational information in semantic resources can yield improvements in word similarity performance.", "Our fine-grained analysis of the original retrofitting process shows that", "1) the evaluation metric matters: root-mean-square error (RMSE) is more sensitive to gains in performance than correlation measures; and", "2) coverage matters: improvements offered by resources are highly dependent on their coverage of the evaluation benchmark.", "Future attempts to improve word vectors can only succeed if gains in word vector performance are inspected carefully.", "This research was supported in part by the Defense Threat Reduction Agency (DTRA).", "Disclaimer: The project or effort depicted was or
is sponsored by the Department of Defense, Defense Threat Reduction Agency.", "The content of the information does not necessarily reflect the position or the policy of the federal government, and no official endorsement should be inferred." ]
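The divergence analyzed above — each pair's error shrinking while rank correlation still drops because of added scatter — can be reproduced numerically. A minimal sketch with synthetic scores (the arrays are hypothetical, not the paper's benchmark data; NumPy is assumed to be available):

```python
import numpy as np

rng = np.random.default_rng(0)

def rmse(pred, gold):
    # Root-mean-square error: each pair contributes independently
    return np.sqrt(np.mean((pred - gold) ** 2))

def spearman(a, b):
    # Spearman rank correlation = Pearson correlation of the ranks (no ties here)
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return (ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb))

gold = np.linspace(1, 10, 200)                # synthetic human judgments
before = 0.5 * gold + 1.0                     # biased but perfectly monotone predictions
after = gold + rng.normal(0, 1.2, gold.size)  # closer to gold on average, but scattered

# Moving mass toward the gold standard lowers RMSE even though the added
# scatter hurts the rank correlation, which depends on the positions of
# all other pairs.
assert rmse(after, gold) < rmse(before, gold)
assert spearman(after, gold) < spearman(before, gold)
```

The "before" predictions are far from the gold standard but perfectly monotone (correlation 1.0), while the "after" predictions sit much closer on average yet correlate worse — the same pattern reported for FN-ANNO against MT771.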
[ "abstain", "abstain", "abstain", "result", "method", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "result", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", 
"abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "method", "other", "other", "other", "other", "method", "other", "abstain", "result", "abstain", "abstain", "abstain", "other", "other", "other" ]
[ "Medical imaging plays a significant role in the clinical practice of medical diagnosis, where the text reports of the images are essential in understanding them and facilitating later treatments.", "Generating these reports automatically helps lighten the burden of radiologists and significantly promotes clinical automation, which already attracts much attention in applying artificial intelligence to the medical domain.", "Previous studies mainly follow the encoder-decoder paradigm and focus on the aspect of text generation, with few studies considering the importance of cross-modal mappings and explicitly exploiting such mappings to facilitate radiology report generation.", "In this paper, we propose cross-modal memory networks (CMN) to enhance the encoder-decoder framework for radiology report generation, where a shared memory is designed to record the alignment between images and texts so as to facilitate the interaction and generation across modalities.", "Experimental results illustrate the effectiveness of our proposed model, where state-of-the-art performance is achieved on two widely used benchmark datasets, i.e., IU X-Ray and MIMIC-CXR.", "Further analyses also prove that our model is able to better align information from radiology images and texts so as to help generate more accurate reports in terms of clinical indicators.", "1 Introduction Interpreting radiology images (e.g., chest X-ray) and writing diagnostic reports are essential operations in clinical practice and normally require a considerable manual workload.", "Therefore, radiology report generation, which aims to automatically generate a free-text description based on a radiograph, is highly desired to ease the burden of radiologists while maintaining the quality of health care.", "Recently, substantial progress has been made towards research on automated radiology report generation models (Jing et al., 2018; Li et al., 2018; Johnson et al., 2019; Liu
et al., 2019; Jing et al., 2019).", "Most existing studies adopt a conventional encoder-decoder architecture, with convolutional neural networks (CNNs) as the encoder and recurrent (e.g., LSTM/GRU) or non-recurrent networks (e.g., Transformer) as the decoder, following the image captioning paradigm (Vinyals et al., 2015; Anderson et al., 2018).", "Although these methods have achieved remarkable performance, they are still restrained in fully employing the information across radiology images and reports, such as the mappings demonstrated in Figure 1, where aligned visual and textual features point to the same content.", "The reason for the restraint comes from both the scarcity of annotated correspondences between image and text for supervised learning and the lack of model designs that learn such correspondences.", "Unfortunately, few studies are dedicated to solving this restraint.", "Therefore, a better solution is expected to model the alignments across modalities and further improve the generation ability, although promising results are continuously acquired by other approaches (Li et al., 2018; Liu et al., 2019; Jing et al., 2019; Chen et al., 2020).", "2 Along this research track, recently there is only Jing et al.
(2018), which studies a multi-task learning framework with a co-attention mechanism to explicitly explore information linking particular parts in a radiograph and its corresponding report.", "In this paper, we propose an effective yet simple approach to radiology report generation enhanced by cross-modal memory networks (CMN), which is designed to facilitate the interactions across modalities (i.e., images and texts).", "In detail, we use a memory matrix to store the cross-modal information and use it to perform memory querying and memory responding for the visual and textual features: for memory querying, we extract the most related memory vectors from the matrix and compute their weights according to the input visual and textual features, and then generate responses by weighting the queried memory vectors.", "Afterwards, the responses corresponding to the input visual and textual features are fed into the encoder and decoder, so as to generate reports enhanced by such explicitly learned cross-modal information.", "Experimental results on two benchmark datasets, IU X-RAY and MIMIC-CXR, confirm the validity and effectiveness of our proposed approach, where state-of-the-art performance is achieved on both datasets.", "Several analyses are also performed to examine the effects of different factors on our model, showing that our model is able to generate reports with meaningful image-text mappings while requiring few extra parameters in doing so.", "We regard radiology report generation as an image-to-text generation task, for which there exist several solutions (Vinyals et al., 2015; Xu et al., 2015; Anderson et al., 2018; Cornia et al., 2019).", "Although images are organized in a 2-D format, we follow the standard sequence-to-sequence paradigm for this task as performed in Chen et al.
(2020).", "In detail, the source sequence is $X = \{x_1, x_2, ..., x_s, ..., x_S\}$, where $x_s \in \mathbb{R}^d$ are extracted by visual extractors from a radiology image $I$, and the target sequence is the corresponding report $Y = \{y_1, y_2, ..., y_t, ..., y_T\}$, where $y_t \in \mathbb{V}$ are the generated tokens, $T$ the length of the report and $\mathbb{V}$ the vocabulary of all possible tokens.", "The entire generation process is thus formalized as a recursive application of the chain rule $p(Y|I) = \prod_{t=1}^{T} p(y_t | y_1, ..., y_{t-1}, I)$ (1). The model is then trained to maximize $p(Y|I)$ through the negative conditional log-likelihood of $Y$ given $I$: $\theta^* = \arg\max_{\theta} \sum_{t=1}^{T} \log p(y_t | y_1, ..., y_{t-1}, I; \theta)$ (2), where $\theta$ denotes the parameters of the model.", "An overview of the proposed model is demonstrated in Figure 2, with cross-modal memories emphasized.", "The details of our approach are described in the following subsections regarding its three major components, i.e., the visual extractor, the cross-modal memory networks and the encoder-decoder process enhanced by the memory.", "To generate radiology reports, the first step is to extract the visual features from radiology images.", "In our approach, the visual features $X$ of a radiology image $I$ are extracted by pre-trained convolutional neural networks (CNN), such as VGG (Simonyan and Zisserman, 2015) or ResNet (He et al., 2016).", "Normally, an image is decomposed into regions of equal size, i.e., patches, and the features (representations) of them are extracted from the last convolutional layer of the CNN.", "Once extracted, the features in our study are expanded into a sequence by concatenating them from each row of the patches on the image.", "The resulting representation sequence is used as the source input for all subsequent modules, and the process is formulated as $\{x_1, x_2, ..., x_s, ..., x_S\} = f_v(I)$ (3), where $f_v(\cdot)$ refers to the visual extractor.", "To model the alignment between image and text, existing
studies tend to map between images and texts directly from their encoded representations (e.g., Jing et al. (2018) used a co-attention to do so).", "However, this process always suffers from the limitation that the representations across modalities are hard to align, so that an intermediate medium is expected to enhance and smooth such mapping.", "To address the limitation, we propose to use CMN to better model the image-text alignment, so as to facilitate the report generation process.", "With the proposed CMN, the mapping and encoding can be described in the following procedure.", "Given a source sequence $\{x_1, x_2, ..., x_S\}$ (features extracted from the visual extractor) from an image, we feed it to this module to obtain the memory responses of the visual features $\{r_{x_1}, r_{x_2}, ..., r_{x_S}\}$.", "Similarly, given a generated sequence $\{y_1, y_2, ..., y_{t-1}\}$ with its embedding, it is also fed to the cross-modal memory networks to output the memory responses of the textual features $\{r_{y_1}, r_{y_2}, ..., r_{y_{t-1}}\}$.", "In doing so, the shared information of visual and textual features can be recorded in the memory so that the entire learning process is able to explicitly map between the images and texts.", "Specifically, the cross-modal memory networks employ a matrix to preserve information for the encoding and decoding processes, where each row of the matrix (i.e., a memory vector) records particular cross-modal information connecting images and texts.", "3 E.g., VGG/ResNet uses region size 32 × 32 (in pixels).", "We denote the matrix as $M = \{m_1, m_2, ..., m_i, ..., m_N\}$, where $N$ represents the number of memory vectors and $m_i \in \mathbb{R}^d$ the memory vector at row $i$, with $d$ referring to its dimension.", "During the process of report generation, CMN operates in two main steps, namely querying and responding, whose details are described as follows.", "Memory Querying We apply multi-thread querying to perform this
operation, where in each thread the querying process follows the same procedure described as follows.", "In querying memory vectors, the first step is to ensure that the input visual and textual features are in the same representation space.", "Therefore, we convert each memory vector in $M$ as well as the input features through linear transformations by $k_i = m_i W_k$ (4), $q_s = x_s W_q$ (5) and $q_t = y_t W_q$ (6), where $W_k$ and $W_q$ are trainable weights for the conversion.", "Then we separately extract the most related memory vectors to the visual and textual features according to their distances $D^s_i$ and $D^t_i$ through $D^s_i = \frac{q_s k_i^\top}{\sqrt{d}}$ (7) and $D^t_i = \frac{q_t k_i^\top}{\sqrt{d}}$ (8), where the number of extracted memory vectors can be controlled by a hyper-parameter $K$ to regularize how much memory is used.", "We denote the queried memory vectors as $\{k^s_1, k^s_2, ..., k^s_j, ..., k^s_K\}$ and $\{k^t_1, k^t_2, ..., k^t_j, ..., k^t_K\}$.", "Afterwards, the importance weight of each memory vector with respect to the visual and textual features is obtained by normalization over all distances by $w^s_i = \frac{\exp(D^s_i)}{\sum_{j=1}^{K} \exp(D^s_j)}$ (9) and $w^t_i = \frac{\exp(D^t_i)}{\sum_{j=1}^{K} \exp(D^t_j)}$ (10). Note that the above steps are applied in each thread to allow memory querying from different memory representation subspaces.", "4 Note that these two steps are performed in both training and inference stages, where in inference, all textual features are obtained along with the generation process.", "5 Thread number can be arbitrarily set in experiments.", "Memory Responding The responding process is also conducted in a multi-thread manner corresponding to the querying process.", "For each thread, we first perform a linear transformation on the queried memory vectors via $v_i = m_i W_v$ (11), where $W_v$ is the trainable weight for $m_i$, so that the queried memory vectors are transformed into $\{v^s_1, v^s_2, ..., v^s_j, ..., v^s_K\}$ and $\{v^t_1, v^t_2, ..., v^t_j, ..., v^t_K\}$.", "Then, we obtain the memory responses
for visual and textual features by weighting over the transformed memory vectors by $r_{x_s} = \sum_{i=1}^{K} w^s_i v^s_i$ (12) and $r_{y_t} = \sum_{i=1}^{K} w^t_i v^t_i$ (13), where $w^s_i$ and $w^t_i$ are the weights obtained from memory querying.", "Similar to memory querying, we apply memory responding to all the threads so as to obtain responses from different memory representation subspaces.", "Since the quality of input representation plays an important role in model performance (Pennington et al., 2014; Song et al., 2017, 2018; Peters et al., 2018; Song and Shi, 2018; Devlin et al., 2019; Song et al., 2021), the encoder-decoder in our model is built upon the standard Transformer (a powerful architecture that has achieved state-of-the-art results in many tasks), where the memory responses of visual and textual features serve as the input of the encoder and decoder so as to enhance the generation process.", "In detail, as the first step, the memory responses $\{r_{x_1}, r_{x_2}, ..., r_{x_S}\}$ for visual features are fed into the encoder through $\{z_1, z_2, ..., z_S\} = f_e(r_{x_1}, r_{x_2}, ..., r_{x_S})$ (14), where $f_e(\cdot)$ represents the encoder.", "Then the resulting intermediate states $\{z_1, z_2, ..., z_S\}$ are sent to the decoder at each decoding step, jointly with the memory responses $\{r_{y_1}, r_{y_2}, ..., r_{y_{t-1}}\}$ for the textual features of generated tokens from previous steps, so as to generate the current output $y_t$ by $y_t = f_d(z_1, z_2, ..., z_S, r_{y_1}, r_{y_2}, ..., r_{y_{t-1}})$ (15), where $f_d(\cdot)$ refers to the decoder.", "As a result, to generate a complete report, the above process is repeated until the generation is finished.", "We employ two conventional benchmark datasets in our experiments, i.e., IU X-RAY (Demner-Fushman et al., 2016) from Indiana University and MIMIC-CXR (Johnson et al., 2019) from the Beth Israel Deaconess Medical Center.", "The former is a relatively small dataset with 7,470 chest X-ray images and 3,955 corresponding reports; the latter is the
largest public radiography dataset with 473,057 chest X-ray images and 206,563 reports.", "Following the experiment settings from previous studies (Li et al., 2018; Jing et al., 2019; Chen et al., 2020), we only generate the findings section and exclude the samples without the findings section for both datasets.", "For IU X-RAY , we use the same split (i.e., 70%/10%/20% for train/validation/test set) as that stated in Li et al. (2018), and for MIMIC-CXR we adopt its official split.", "Table 1 shows the statistics of all datasets in terms of the numbers of images, reports, patients and the average length of reports with respect to the train/validation/test set.", "To examine our proposed model, we use the following ones as the main baselines in our experiments: BASE : this is the backbone encoder-decoder used in our full model, i.e., a three-layer Transformer model with 8 heads and 512 hidden units without other extensions.", "BASE + MEM : this is the Transformer model with the same architecture as BASE , where two memory networks are separately applied to image and text, respectively.", "This baseline aims to provide a reference to the cross-modal memory.", "In addition, we compare our model with existing studies, including conventional image captioning models, e.g., ST (Vinyals et al., 2015), ATT2IN (Rennie et al., 2017), ADAATT (Lu et al., 2017), TOPDOWN (Anderson et al., 2018), and the ones proposed for the medical domain, e.g., COATT (Jing et al., 2018), HRGR (Li et al., 2018), CMAS-RL (Jing et al., 2019) and R2GEN (Chen et al., 2020).", "Following Chen et al.
(2020), we evaluate the above models by two types of metrics, conventional natural language generation (NLG) metrics and clinical efficacy (CE) metrics 8 .", "The NLG metrics 9 include BLEU (Papineni et al., 2002), METEOR (Denkowski and Lavie, 2011) and ROUGE-L (Lin, 2004).", "For CE metrics, the CheXpert (Irvin et al., 2019) 10 is applied to label the generated reports and compare the results with ground truths in 14 different categories related to thoracic diseases and support devices.", "We use precision, recall and F1 to evaluate model performance for CE metrics.", "To ensure consistency with the experiment settings of previous work (Li et al., 2018; Chen et al., 2020), we use two images of a patient as input for report generation on IU X-RAY and one image for MIMIC-CXR.", "For visual extractor, we adopt the ResNet101 (He et al., 2016) pretrained on Ima-geNet (Deng et al., 2009) to extract patch features with 512 dimensions for each feature.", "For the encoder-decoder backbone, we use a Transformer structure with 3 layers and 8 attention heads, 512 dimensions for hidden states and initialize it randomly.", "For the memory matrix in CMN, its dimen-8 Note that CE metrics only apply to MIMIC-CXR because the labeling schema of CheXpert is designed for MIMIC-CXR, which is different from that of IU X-RAY .", "9 https://github.com/tylin/coco-caption 10 https://github.com/MIT-LCP/mimic-cxr/ tree/master/txt/chexpert sion and the number of memory vectors N are set to 512 and 2048, respectively, and also randomly initialized.", "For memory querying and responding, thread number and the K are set to 8 and 32, respectively.", "We train our model under cross entropy loss with Adam optimizer (Kingma and Ba, 2015).", "The learning rates of the visual extractor and other parameters are set to 5 10 5 and 1 10 4 , respectively, and we decay them by a 0 .", "8 rate per epoch for all datasets.", "For the report generation process, we set the beam size to 3 to balance the 
effectiveness and efficiency of all models.", "Note that the optimal hyper-parameters mentioned above are obtained by evaluating the models on the validation sets of the two datasets.", "The main experimental results on the two aforementioned datasets are shown in Table 2, where BASE + CMN represents our model (same below).", "There are several observations drawn from different aspects.", "First, both BASE + MEM and BASE + CMN outperform the vanilla Transformer (BASE) on both datasets with respect to NLG metrics, which confirms the validity of incorporating memory to introduce more knowledge into the Transformer backbone.", "Such knowledge may come from the hidden structures and regularity patterns shared among radiology images and their reports, so that the memory modules are able to explicitly model them to promote the recognition of diseases (symptoms) and the generation of reports.", "Second, the comparison between BASE + CMN and the two baselines on different metrics confirms the effectiveness of our proposed model, with significant improvement.", "Particularly, BASE + CMN outperforms BASE + MEM by a large margin, which indicates the usefulness of CMN in learning cross-modal features with a shared structure rather than separate ones.", "Table 3: Comparisons of our proposed model with previous studies on the test sets of IU X-RAY and MIMIC-CXR with respect to NLG and CE metrics.
DATA | MODEL | BL-1 | BL-2 | BL-3 | BL-4 | MTR | RG-L | P | R | F1
IU X-RAY | ST | 0.216 | 0.124 | 0.087 | 0.066 | - | 0.306 | - | - | -
IU X-RAY | ATT2IN | 0.224 | 0.129 | 0.089 | 0.068 | - | 0.308 | - | - | -
IU X-RAY | ADAATT | 0.220 | 0.127 | 0.089 | 0.068 | - | 0.308 | - | - | -
IU X-RAY | COATT | 0.455 | 0.288 | 0.205 | 0.154 | - | 0.369 | - | - | -
IU X-RAY | HRGR | 0.438 | 0.298 | 0.208 | 0.151 | - | 0.322 | - | - | -
IU X-RAY | CMAS-RL | 0.464 | 0.301 | 0.210 | 0.154 | - | 0.362 | - | - | -
IU X-RAY | R2GEN | 0.470 | 0.304 | 0.219 | 0.165 | 0.187 | 0.371 | - | - | -
IU X-RAY | OURS (CMN) | 0.475 | 0.309 | 0.222 | 0.170 | 0.191 | 0.375 | - | - | -
MIMIC-CXR | ST | 0.299 | 0.184 | 0.121 | 0.084 | 0.124 | 0.263 | 0.249 | 0.203 | 0.204
MIMIC-CXR | ATT2IN | 0.325 | 0.203 | 0.136 | 0.096 | 0.134 | 0.276 | 0.322 | 0.239 | 0.249
MIMIC-CXR | ADAATT | 0.299 | 0.185 | 0.124 | 0.088 | 0.118 | 0.266 | 0.268 | 0.186 | 0.181
MIMIC-CXR | TOPDOWN | 0.317 | 0.195 | 0.130 | 0.092 | 0.128 | 0.267 | 0.320 | 0.231 | 0.238
MIMIC-CXR | R2GEN | 0.353 | 0.218 | 0.145 | 0.103 | 0.142 | 0.277 | 0.333 | 0.273 | 0.276
MIMIC-CXR | OURS (CMN) | 0.353 | 0.218 | 0.148 | 0.106 | 0.142 | 0.278 | 0.334 | 0.275 | 0.278", "Third, when comparing between datasets, the performance gains from BASE + CMN over the two baselines (i.e., BASE and BASE + MEM) on MIMIC-CXR are larger than those on IU X-RAY.", "This observation owes to the fact that MIMIC-CXR is relatively larger, which helps the learning of the alignment between images and texts, so that CMN helps more on report generation on MIMIC-CXR.", "Moreover, this size effect also explains why our model shows the same trend on the CE metrics on MIMIC-CXR as on the NLG metrics, where it outperforms all its baselines in terms of precision, recall and F1.", "To further demonstrate the effectiveness, we compare our model with existing models on the same datasets, with their results reported in Table 3 on both NLG and CE metrics.", "We have the following observations.", "First, cross-modal memory shows its effectiveness in this task, where our model outperforms COATT, although both of them improve report generation through the alignment of visual and textual features.", "The likely reason is that our model uses a shared memory matrix as the medium to softly align the visual and textual features instead of aligning them directly with a co-attention mechanism, thus unifying cross-modal features within the same representation space and facilitating the alignment process.", "Second, our model demonstrates the advantage of simplicity when compared with those more complicated models.",
"For example, HRGR uses manually extracted templates and CMAS-RL utilizes reinforcement learning with a careful design of adaptive rewards, while our model achieves better results with a much simpler method.", "Third, applying memory to both encoding and decoding can further improve the generation ability of the Transformer, as shown by the comparison with R2GEN, which only uses memory in decoding.", "This observation complies with our intuition that the cross-modal operation tightens the encoding and decoding so that our model generates higher-quality reports.", "Fourth, note that although there are other models (i.e., COATT and HRGR) that exploit extra information (such as private datasets for visual extractor pre-training), our model still achieves state-of-the-art performance without requiring such information.", "It reveals that, in this task, the hidden structures among the images and texts and a good solution for exploiting them are more essential in promoting report generation performance.", "Figure 3: The BLEU-4 score and the number of parameters of BASE + CMN against the memory size (i.e., the number of memory vectors) when the model is trained and tested on the MIMIC-CXR dataset.", "Memory Size To analyze the impact of memory size, we train our model with different numbers of memory vectors, i.e., N ranging from 32 to 4096, with the results on MIMIC-CXR shown in Figure 3.", "
It is observed that, first, enlarging the memory by adding vectors results in better overall performance when the entire memory matrix is relatively small (N ≤ 1024), which can be explained by the fact that, within a certain memory capacity, a larger memory helps store more cross-modal information; second, once the memory matrix grows beyond a threshold, adding more memory vectors no longer promises a better outcome.", "An explanation for this observation may be that, when the matrix gets too large, the memory vectors cannot be fully updated, so they do not help the generation process and instead act as noise.", "More interestingly, it is noted that even with a rather large memory size (i.e., N = 4096), only 3.34% extra parameters are added to the model compared to BASE, which shows that introducing memory into the report generation process through our model comes at a small cost.", "Number of Queried Memory Vectors To analyze how querying impacts report generation, we try CMN with different numbers of queried vectors, i.e., K ranging from 1 to 512, and show the results in Figure 4.", "
It is found that the number of queried vectors should be neither too small nor too large: enlarging K leads to better results when K ≤ 32, and beyond this threshold the performance starts to drop.", "Figure 4: The BLEU-4 score of BASE + CMN on the MIMIC-CXR test set against different numbers of queried memory vectors.", "The reason might be overfitting in memory updating: when K is small, the memory matrix is sparsely updated in each iteration and is thus hard to overfit, while more queried vectors cause intensive updating of the matrix, so some of the essential vectors are over-updated accordingly.", "As a result, it is useful to identify the optimal number of queried vectors (i.e., 32), which provides guidance for further improving report generation by controlling the querying process.", "Case Study To further qualitatively investigate how our model learns the alignments between visual and textual information, we perform a case study on the reports generated by different models for an input chest X-ray image chosen from MIMIC-CXR.", "Figure 5 shows the image with its ground-truth report, and the different generated reports with selected mappings between visual features (parts of the image) and textual features (words and phrases), where the mapped areas on the image are highlighted with different colors.", "In general, BASE + CMN is able to generate more accurate descriptions (in terms of better visual-textual mapping) in the report, while the other baselines are inferior in doing so.", "For instance, normal medical conditions and abnormalities presented in the chest X-ray image are covered by the report generated by BASE + CMN (e.g., severe cardiomegaly, pulmonary edema and pulmonary arteries) and the related regions on the image are precisely located
with respect to the texts, while the areas highlighted on the image by other models are inaccurate.", "The representations of the textual features are extracted from the first layer of the decoder.", "To further illustrate how the alignment works between visual and textual features, we perform a t-SNE visualization on the memory vectors linked to an image and its generated report from the MIMIC-CXR test set.", "It is observed that the word lung in the report and the visual feature for the region of the lung on the image query similar memory vectors from CMN, and a similar observation is drawn for hemidiaphragms and its corresponding regions on the image.", "This case confirms that memory vectors are an effective intermediate medium for interaction between image and text features.", "In general, the most popular task related to ours is image captioning, a cross-modal task involving natural language processing and computer vision, which aims to describe images in sentences (Vinyals et al., 2015; Xu et al., 2015; Anderson et al., 2018; Wang et al., 2019; Cornia et al., 2019).", "Among these studies, the most related study from Cornia et al.
(2019) also proposed to leverage memory matrices to learn a priori knowledge for visual features using memory networks (Weston et al., 2015; Sukhbaatar et al., 2015; Zeng et al., 2018; Santoro et al., 2018; Nie et al., 2020; Diao et al., 2020; Tian et al., 2020b, 2021; Chen et al., 2021), but this operation is only performed during the encoding process.", "Different from this work, the memory in our model is designed to align the visual and textual features, and the memory operations (i.e., querying and responding) are performed in both the encoding and decoding processes.", "Recently, many advanced NLP techniques (e.g., pre-trained language models) have been applied to tasks in the medical domain (Pampari et al., 2018; Zhang et al., 2018; Wang et al., 2018; Alsentzer et al., 2019; Tian et al., 2019, 2020a; Wang et al., 2020; Lee et al., 2020; Song et al., 2020).", "As one of the applications and extensions of image captioning to the medical domain, radiology report generation aims to depict radiology images with professional reports.", "Existing methods were designed either to better align images and texts or to exploit the highly patternized features of texts.", "For the former, Jing et al. (2018) proposed a co-attention mechanism to simultaneously explore visual and semantic information with a multi-task learning framework.", "For the latter, Li et al. (2018) introduced a template database to incorporate patternized information, and Chen et al.
(2020) improved the performance of radiology report generation by applying a memory-driven Transformer to model patternized information.", "Compared to these studies, our model offers an effective yet simple alternative for generating radiology reports, where a soft intermediate layer is provided to facilitate the mappings between visual and textual features, so that more accurate descriptions are produced.", "In this paper, we propose to generate radiology reports with cross-modal memory networks, where a memory matrix is employed to record the alignment and interaction between images and texts, with memory querying and responding performed to obtain the shared information across modalities.", "Experimental results on two benchmark datasets demonstrate the effectiveness of our model, which achieves state-of-the-art performance.", "Further analyses investigate the effects of hyper-parameters in our model and show that it is able to better align information from images and texts so as to generate more accurate reports; notably, enlarging the memory matrix does not significantly affect the overall model size.", "This work is supported by the Chinese Key-Area Research and Development Program of Guangdong Province (2020B0101350001) and NSFC under the project The Essential Algorithms and Technologies for Standardized Analytics of Clinical Texts (12026610)." ]
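The memory querying and responding mechanism described in the paper above can be sketched in code. The following is an illustrative simplification, not the authors' implementation: it uses a single head with plain dot-product similarity and a softmax-weighted combination of the top-K memory vectors, whereas the actual CMN also uses multiple parallel threads (8 in the paper) and learned projections, and its exact weighting scheme may differ. Only the shapes follow the paper's reported settings (N = 2048 memory vectors, d = 512, K = 32).

```python
import numpy as np

def query_memory(features, memory, k=32):
    """Illustrative sketch of cross-modal memory querying/responding.

    features: (T, d) visual or textual features
    memory:   (N, d) shared cross-modal memory matrix
    returns:  (T, d) memory responses, one per input feature
    """
    sims = features @ memory.T                 # (T, N) dot-product similarity
    topk = np.argsort(-sims, axis=1)[:, :k]    # k most similar memory vectors per query
    responses = np.empty_like(features)
    for t in range(features.shape[0]):
        s = sims[t, topk[t]]
        w = np.exp(s - s.max())
        w /= w.sum()                           # softmax over the top-k similarities
        responses[t] = w @ memory[topk[t]]     # weighted combination as the response
    return responses

# Shapes as reported in the paper: N = 2048 memory vectors, d = 512, K = 32.
rng = np.random.default_rng(0)
memory = 0.02 * rng.standard_normal((2048, 512))
feats = rng.standard_normal((5, 512))
out = query_memory(feats, memory, k=32)
print(out.shape)  # -> (5, 512)
```

Because both image-patch features and word features are mapped through the same memory matrix, their responses live in one shared representation space, which is the soft alignment effect the paper attributes to CMN.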
[ "abstain", "abstain", "abstain", "objective", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "objective", "result", "method", "abstain", "method", "other", "other", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "other", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "other", "abstain", "result", "other", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "method", "other", "other", "other", "other", "other", "method", "objective", "objective", "objective", "other" ]
[ "Speech pre-training has primarily demonstrated efficacy on classification tasks, while its capability of generating novel speech, similar to how GPT-2 can generate coherent paragraphs, has barely been explored.", "Generative Spoken Language Modeling (GSLM) (Lakhotia et al., 2021) is the only prior work addressing the generative aspects of speech pre-training; it replaces text with discovered phone-like units for language modeling and shows the ability to generate meaningful novel sentences.", "Unfortunately, despite eliminating the need for text, the units used in GSLM discard most of the prosodic information.", "Hence, GSLM fails to leverage prosody for better comprehension, and does not generate expressive speech.", "In this work, we present a prosody-aware generative spoken language model (pGSLM).", "It is composed of a multi-stream transformer language model (MS-TLM) of speech, represented as discovered unit and prosodic feature streams, and an adapted HiFi-GAN model converting MS-TLM outputs to waveforms.", "We devise a series of metrics for prosody modeling and generation, and re-use metrics from GSLM for content modeling.", "Experimental results show that pGSLM can utilize prosody to improve both prosody and content modeling, and also generate natural, meaningful, and coherent speech given a spoken prompt.", "Audio samples can be found at https://speechbot.", "1 Introduction Natural language processing (NLP) has made tremendous progress recently.", "One of the most significant findings is that language models (LMs) are natural unsupervised multitask learners (Radford et al., 2018, 2019; Brown et al., 2020): by simply training a big neural network on next word prediction with a large amount of unlabeled text, it learns to comprehend, answer questions, summarize, and even translate (Radford et al., 2019).", "Fine-tuning such pre-trained models further leads to state-of-the-art performance on numerous benchmark tasks (Brown et al., 2020),
beating tailor-made models trained from scratch only on labeled data.", "Given the impressive performance of pre-trained text language models, it is tempting to approach spoken language processing tasks by first transcribing speech into text with an automatic speech recognition (ASR) system and then utilizing text-based models for comprehension and generation.", "However, there are a number of caveats for such a framework.", "First, the majority of the world's languages are primarily spoken and do not have associated texts in large quantities (Lewis et al., 2016).", "In practice, this limits the reach of NLP techniques to the fraction of the world's languages that have a large presence on the web and for which there exists a widely available high-quality ASR system.", "Second, despite sharing the same vocabulary and syntactic rules, the spoken form and the written form of the same language still vary significantly in terms of sentence lengths, word distributions, presence of disfluencies and back-channelings, and so on (Biber, 1991).", "This makes language models pre-trained on web text unsuitable for processing spoken languages.", "Third, text does not reflect the rich set of features conveyed by oral languages.", "Speech carries not only phonetic information, but also nonverbal vocalizations (laughter, voice clicks, filler vocalization, etc.), rhythm and intonation (prosody), and emotional markers.", "All of these features could help, not only with generating more expressive speech (Ren et al., 2020; Łańcucki, 2021), but also with the semantic analysis of the content of the message (Cutler et al., 1997; Tran et al., 2017).", "To combat these deficiencies, there has recently been increasing interest in exploring speech pre-training using large quantities of unlabeled speech data (Chung et al., 2019; Schneider et al., 2019; Kharitonov et al., 2021; Baevski et al., 2020; Hsu et al., 2021c; Liu et al., 2020; Ling and Liu, 2020; Tjandra et al., 2020; Hsu et al.,
2021b,a).", "However, most of these studies evaluate their models on discriminative tasks, such as ASR and those in the SUPERB benchmark (Yang et al., 2021).", "To the best of our knowledge, generative spoken language modeling (GSLM) (Lakhotia et al., 2021) is the only prior work that evaluates prompted speech completion, a generative task similar to the text completion task in GPT-2 (Radford et al., 2019).", "To remove the reliance on text, GSLM exploits discovered units from self-supervised models to build a unit language model (uLM) and a unit-to-spectrogram (u2S) model.", "Speech completion can be achieved by first sampling a unit sequence from the uLM with a unit prompt inferred from a speech prompt, and then synthesizing the sampled sequence into speech with the u2S model.", "Unfortunately, because those discovered units encode mostly phonetic information (Polyak et al., 2021), the uLM suffers from the same prosodic information loss issue as text-based LMs.", "Therefore, when using that uLM for speech completion, it fails to continue in a tone coherent with the prompt.", "In this paper, we introduce a prosody-aware generative spoken language model (pGSLM) that jointly models phonetic content and prosody, in order to leverage prosody for comprehension and to generate speech coherent with the prompt, which is a precursor for building speech-based dialogue systems.", "In keeping with our aim of liberating NLP from its over-reliance on text, we follow GSLM and represent the phonetic content with self-supervised units discovered from raw audio.", "As for prosody, it is represented by the pattern of quantized fundamental frequency (F0) and duration.", "pGSLM is comprised of two separately trained components: an auto-regressive Multi-Stream Transformer Language Model (MS-TLM) that predicts the next phonetic and prosodic representation given the past ones, and a unit High-Fidelity Generative Adversarial Network (HiFi-GAN) adapted from Polyak et al.
(2021) that converts the MS-TLM output into a waveform, like a vocoder.", "To evaluate the proposed model, we adopt metrics from (Lakhotia et al., 2021) for content evaluation, and devise a series of metrics for prosody evaluation.", "Experimental results demonstrate that 1) joint modeling of prosody improves phonetic content modeling, 2) pGSLM can generate speech continuations coherent with the prompt in terms of both content and prosody, and 3) proper choices of model and prosodic representation are crucial to synthesizing natural, coherent, and expressive speech.", "Our work is related to utilizing prosody for comprehension and predicting prosody for speech synthesis, which we discuss in the following sections.", "Prosody, which is often characterized by the rhythm, intonation, and intensity of speech, carries useful information for comprehending speech in addition to the textual content (Cutler et al., 1997).", "Prior studies have shown that including prosody information can improve the performance of text-only models on speech segmentation (Shriberg et al., 2000), dialogue act classification (Shriberg et al., 1998; Ward and Tsukahara, 2000), syntactic parsing (Tran et al., 2017), speech-language pathology (Cohen et al., 2019), ASR (Ostendorf et al., 2003; Shriberg and Stolcke, 2004), and language modeling (Huang and Renals, 2007; Su and Jelinek, 2008; Ward et al., 2012).", "These studies provide strong empirical evidence for the benefit of considering prosody in processing spoken languages, especially in conversational scenarios.", "This work shares the same motivation, but differs from the prior work in two crucial aspects.", "First, this work utilizes discrete units discovered from a self-supervised model and hence does not require any textual supervision, making it applicable to both written and unwritten languages, while in the prior work prosody information is used alongside text.", "Second, our model can be regarded as the speech version of GPT,
which does not require any task-specific labels and can be pre-trained on large quantities of unlabeled speech data.", "The ability to leverage more data has been shown to be the key to achieving good performance in text pre-training.", "The proposed pGSLM model can be re-purposed as a text-to-speech (TTS) model when the phonetic content (represented as a unit sequence) is given and the prosody is generated by the MS-TLM model.", "This is similar to the FastSpeech (Ren et al., 2020) and FastPitch (Łańcucki, 2021) TTS models, where prosodic features are predicted from text and speech is generated conditioned on both the text and the predicted prosodic features.", "As FastSpeech and FastPitch are designed to improve inference-time efficiency over auto-regressive models like Tacotron (Wang et al., 2017), they predict prosodic features and spectrograms without introducing dependency between time steps.", "In other words, these models assume that the prosody features within an utterance are not correlated across time steps given the text, whereas our proposed MS-TLM does not make such an assumption.", "We will demonstrate empirically that conditional independence is not a realistic assumption, and that our model achieves better performance on prosody metrics with auto-regressive modeling.", "As for analysis of prosody modeling, we present more extensive metrics by considering both teacher-forcing decoding and sampling, while prior work does not consider the multi-modal nature of prosody and only generates prosody deterministically (Ren et al., 2020).", "Moreover, we also evaluate prosody in a more disentangled manner by measuring the error of the prosody prediction module alone instead of measuring the error of the prosody extracted from the synthesized waveform: the latter conflates the impact of both the prosody prediction module and the vocoder.", "In this section, we first describe the phonetic and prosodic representations used in pGSLM, and then introduce the two
components it is comprised of: a multi-stream transformer language model and an adapted unit HiFi-GAN.", "We choose units with a vocabulary size of 100 derived from HuBERT (Hsu et al., 2021a), a self-supervised speech model, as the phonetic representation.", "Specifically, these units are obtained by clustering the 6th transformer layer output of the base HuBERT model provided in (Hsu et al., 2021a) using a k-means algorithm, following the HuBERT recipe closely.", "A speech waveform can therefore be encoded into a sequence of discrete units at a frame rate of 50 units per second, or alternatively, into a sequence of (unit, duration) tuples using run-length encoding.", "HuBERT units were found to perform favorably compared to other self-supervised units such as wav2vec 2.0 (Baevski et al., 2020) and VQ-VAE (van den Oord et al., 2017) in terms of lexical content modeling (Lakhotia et al., 2021) and disentangling prosodic information (Polyak et al., 2021).", "We use unit duration d and fundamental frequency (F0, or pitch) f to derive prosodic representations.", "Polyak et al.
(2021) showed that pairing HuBERT units with duration and F0 enables high-quality speech re-synthesis that preserves more prosodic information, such as intonation, compared to re-synthesizing with only units.", "Similar results are demonstrated in several other studies (Ren et al., 2020; Łańcucki, 2021) in the context of text-to-speech synthesis.", "Unfortunately, while F0 encodes prosodic information, it also encodes a significant amount of speaker information.", "Figure A.1 in the appendix illustrates how speaker and prosodic information (emotion) are disentangled in raw pitch using a multi-speaker multi-emotion dataset, EmoV (Adigwe et al., 2018).", "We do not wish to model speaker variation in pGSLM because it is less relevant to spoken language understanding compared to prosody.", "To that end, we propose to model the speaker-mean normalized log F0: lf = log f - E_{f'}[log f'], where the expectation is taken over frames f' from the same speaker as f; this can be interpreted as the log of the ratio to the mean pitch: lf = log(f / f_mean), where f_mean = exp(E_{f'}[log f']).", "Specifically, the equation above is used for voiced frames, and the expectation is taken over voiced frames from a speaker.", "For unvoiced frames, we simply set lf = 0.", "One may ask why F0 is only normalized by the speaker mean but not the variance.", "We argue that the variance encodes the level of expressiveness and it is desirable to preserve it.", "This is demonstrated empirically in Figure A.2 in the appendix, where speakers from expressive datasets, EmoV and Blizzard 2013 (SynSIG), exhibit larger speaker log F0 standard deviations than those in less expressive datasets, LJSpeech (Ito and Johnson, 2017) and VCTK (Veaux et al., 2016).", "On the other hand, we also found that the variance is more correlated with the mean in the linear space than in the log space, as shown in Figure A.3.", "Therefore, we argue that mean-normalized log F0 is a more suitable representation for prosody, as it encodes less speaker
information while preserving the level of expressiveness.", "We adapt the Transformer LM from (Lakhotia et al., 2021) to take multiple streams of input and predict multiple streams of output, and refer to it as the Multi-Stream Transformer Language Model (MS-TLM).", "An MS-TLM predicts a sequence of segment representations, which reduces the sequence length significantly and is found beneficial compared to predicting frame sequences (Lakhotia et al., 2021).", "Each segment is represented with the unit u, duration (in frames) d, and normalized pitch lf.", "The first two are obtained by run-length encoding the fixed frame rate unit sequence, while a segment-level lf is computed by averaging the values of voiced frames within a segment, or set to 0 if the entire segment is unvoiced.", "An example is provided in Appendix C. 3.2.1 Delayed prosody prediction Let subscript t be the segment index.", "At each step, a vanilla MS-TLM takes (u_{t-1}, d_{t-1}, lf_{t-1}) as input, linearly projects each of them to the dimension of the transformer, and feeds the summed embeddings to the transformer.", "The transformer output at that step is projected to the dimension of each stream to predict u_t, d_t, and lf_t independently.", "The distribution modeled by the synchronous MS-TLM, p(u_{1:T}, d_{1:T}, lf_{1:T}), can be written as: ∏_{t=1}^{T} p(u_t | u_{1:t-1}, d_{1:t-1}, lf_{1:t-1}) p(d_t | u_{1:t-1}, d_{1:t-1}, lf_{1:t-1}) p(lf_t | u_{1:t-1}, d_{1:t-1}, lf_{1:t-1}).", "We see that the factorial assumption here may be too strong, because the duration and the pitch of a segment are highly correlated with the phonetic content of the same segment.", "To alleviate that without introducing intra-step dependency or interleaving streams (which increases the sequence length and requires determining an order for the three streams a priori), we introduce a delay factor Δ (Δ ≥ 0) for the prosodic streams, which shifts the prosodic input and output streams backward by Δ steps, taking (u_{t-1}, d_{t-1-Δ}
, lf_{t-1-Δ}) as input and outputting (u_t, d_{t-Δ}, lf_{t-Δ}).", "When Δ = 1, each step of the LM predicts the unit of the current segment and the prosodic representations of the previous segment, whose lexical unit has already been observed, as shown in Figure 1.", "Figure 1: Delayed multi-stream transformer language model with prosody stream delay Δ = 1.", "3.2.2 Quantizing prosodic representations A straightforward solution to encoding the prosody streams d and lf is to represent them as continuous values and minimize an L1 or L2 loss for training, similar to FastSpeech2 (Ren et al., 2020) and FastPitch (Łańcucki, 2021).", "Doing so assumes that the duration and the pitch of a segment follow a unimodal distribution (Laplace for L1 and Gaussian for L2) given the context.", "If the underlying distribution is multimodal with wide spread, the learned distribution would significantly underfit, with a mean far from the modes.", "Empirically, we found that such modeling indeed leads to predicting lf values very close to 0 for all segments, and the generated prosody sounds dull and boring.", "Inspired by WaveNet (Oord et al., 2016), we represent prosodic features as discrete random variables through quantization.", "It is straightforward to quantize d since it originally encodes integer values (length in frames).", "We set the maximum length to be 32 and the bin width to be 1, resulting in 32 bins.", "We quantize the speaker-mean normalized log F0 lf into K = 32 bins such that each bin with boundaries [b_{i-1}, b_i] contains the same probability mass: P(lf ∈ [b_{i-1}, b_i]) = 1/K.", "The training loss is a weighted sum of three per-stream losses.", "Omitting the dependency on the context for brevity, the MS-TLM defines a distribution p(u_t, d_t, lf_t) over the potential values for a timestep t.", "Then, denoting the ground-truth per-channel values as u*_t, d*_t, lf*_t, we get: L(p(u_t, d_t, lf_t), u*_t, d*_t, lf*_t) = L_u(p(u_t), u*_t) + α L_d(p(d_t), d*_t) + β
L_lf(p(lf_t), lf*_t). (2)", "In all experiments, we use cross-entropy as the loss on the predictions of the unit channel (L_u).", "Whenever we operate on quantized prosody values (both duration and F0), we also use cross-entropy for the losses L_d and L_lf.", "In the case of continuous-valued prosody streams, we treat the predicted values p(d_t) and p(lf_t) as the modes of Laplacian distributions and maximize the log likelihood of the model, which is equivalent to minimizing an L1 loss.", "In preliminary experiments, we found that the results are relatively robust to variations of the relative weights α and β, hence we fix α = β = 0.5 in all our experiments.", "To generate new utterances, potentially conditioned on a prompt, we run autoregressive generation where at each step we sample units, duration, and normalized log F0 values, append them to the context, and feed them back.", "In the case of discrete channels (units, and also duration/pitch in the case of discrete-valued models), we sample from the corresponding multinomial distribution.", "As commonly done in language modeling (Lakhotia et al., 2021), we perform sampling with temperature by scaling the logits by the temperature parameter.", "We fine-tune the temperature on the validation data.", "For an MS-TLM that models normalized log F0 as a continuous variable, we draw samples from a Laplacian distribution with its location parameter set to the predicted value, because the model assumes the output distribution is Laplacian (see §3.2.3).", "For duration, to avoid sampling invalid values, we sample from a Laplacian distribution truncated at zero and round the sample to the nearest positive integer.", "Given (u_{1:T}, d_{1:T}, lf_{1:T}) generated by the MS-TLM, we adapt the discrete unit-based HiFi-GAN vocoder from (Polyak et al., 2021) to generate the waveform.", "The original vocoder proposed in (Polyak et al., 2021) takes frame-level discrete units, pitch and a speaker embedding as input and applies VQ-VAE
quantization on the pitch.", "As the MS-TLM predicts quantized speaker-mean normalized log F0 at the segment level, we modify the training of the vocoder so that it takes frame-level segment-average pitch as input, where the pitch values for frames within a segment are set to the same value.", "We apply the same quantization described in §3.2.2 instead of VQ-VAE on the pitch.", "The unit HiFi-GAN and the MS-TLM are trained separately.", "In our experiments, we train MS-TLM models on two English datasets: LibriSpeech (Panayotov et al., 2015) and a 6K-hour subset (Rivière and Dupoux, 2020) of Libri-Light (Kahn et al., 2020), which we refer to as LL-6K.", "Both datasets represent audio books, and we use LibriSpeech dev-clean and test-clean as validation and test sets.", "As described in Section 3.1, we use HuBERT-based unit representations.", "However, to investigate whether our proposed models can work with other types of units, we also experiment with CPC (Rivière and Dupoux, 2020; Oord et al., 2018) and ground-truth phone representations.", "We experiment with a vocabulary of 100 units when working with HuBERT and CPC, following the same protocol and using the same pre-trained models as Lakhotia et al.
(2021).", "Frame-level phone transcripts, on the other hand, are obtained through forced alignment using the tri6b model from Kaldi's LibriSpeech recipe (Povey et al., 2011).", "The position- and context-independent phones without lexical stress markers are used, which comprise 41 units (39 phones, one silence SIL, and one spoken noise SPN).", "The frame rate of CPC and phone units is 100Hz, and 50Hz for HuBERT units.", "We experiment with MS-TLMs of two sizes: base and large.", "The base one has 6 layers, 8 attention heads per layer, and an embedding size of 512.", "Its FFN layer has 2048 units.", "The large variant has 12 layers, each with 16 heads, an embedding size of 1024, and an FFN layer of dimensionality 4096.", "We set attention dropout and dropout probabilities to 0.1 for both alternatives.", "On top of that, we apply sequence-level and span-level (Baevski et al., 2020) input dropout to the two prosody streams.", "Specifically, each stream is zeroed out with a probability of 0.2, and 2% of the steps are selected as starts, from which 5 steps of that stream are zeroed out.", "Optimization is done using Adam (Kingma and Ba, 2014) with a peak learning rate of 5e-4.", "The learning rate ramps up linearly for the first 4K updates, and then decays to 0 with an inverse square-root schedule.", "We train the base model for 70 epochs, and the large model for 100 epochs.", "Each GPU's batch contains up to 3072 (u, d, lf) segments, and we use 8 (16) GPUs to train the base (large) MS-TLM.", "For each update, we aggregate gradients from 8 batches.", "Our overall goal is to find models that can freely generate meaningful content and consistent as well as diverse prosody.", "In this section, we define a set of metrics that measure the models' performance on each stream individually and combined, in both the teacher-forcing mode and the inference mode.", "A simple way to evaluate models is to measure their loss on held-out data in a setup where at each step the full ground truth context is
provided.", "For the unit stream, we measure Negative Log-Likelihood (NLL), equivalent to cross-entropy.", "For the duration and pitch streams we use Mean Absolute Error (MAE), equivalent to L1 loss.", "When the pitch values are quantized, we de-quantize predictions to the means of the respective buckets.", "We next evaluate the model's ability to complete a stream in isolation.", "Specifically, we provide a 3s prompt for all streams, and then sample the target stream auto-regressively while feeding the ground-truth values for the other streams, as depicted in Figure 2.", "The prompts are inferred from the utterances in the validation set.", "When prosodic features are quantized, we sample with a temperature in {0.0, 0.25, 0.5, 0.7, 1.0, 1.3}, and when they are continuous, we sample with a scale b in {0.0, 0.05, 0.125, 0.25, 0.5, 0.7, 1.0, 1.3} for duration and b in 0.01 · {2^6, 2^5, ..., 2^0} for pitch.", "The temperature/scale is chosen to minimize the Min-MAE for the corresponding stream, which we describe next.", "We chose different sweeping ranges for continuous pitch and duration because they have different inherent standard deviations.", "Correctness (Min-MAE) A prompt might have multiple meaningful continuations in the content space (Lakhotia et al., 2021).", "Similarly, a single sentence can have multiple correct prosodic profiles.", "To account for that, for each prompt we generate n = 20 samples so that a model has a chance to cover most modes of the underlying distribution, and report the minimal MAE (Min-MAE) against the reference among the n samples.", "Consistency (Corr.)
To quantify the models' capability to generate consistent prosody, we measure the Pearson correlation between the mean values of a stream in the prompt and in the generated continuation.", "Clearly, if the prompt has a distinct tempo or pitch, a good continuation should reflect this.", "The same setup as for the Min-MAE metric is used (n = 20), with one exception: we only consider sequences that are at least 6s long.", "Expressiveness (Std.) To measure how expressive the generated prosody is, we calculate the standard deviation of the generated values and expect a good model to exhibit a level similar to that of the ground truth.", "The same setup as for Min-MAE is used.", "Lastly, we evaluate the model's ability to carry out prompted speech completion, where all three streams are sampled given a 3s prompt, using the temperature/scale parameters determined from per-stream continuation (Section 4.2.2), as illustrated in Figure 3.", "We sample the MS-TLM auto-regressively until it emits the EOS unit or reaches the length of the reference.", "The MS-TLM output is synthesized into a waveform using the adapted HiFi-GAN.", "Content (Max-Word-Cont-BLEU2) We re-use the maximum word-level continuation BLEU2 proposed by Lakhotia et al. (2021) to quantify how well a model can complete a prompt in terms of textual content.", "We transcribe the waveform with an off-the-shelf wav2vec 2.0-based ASR (Baevski et al., 2020) (same as in (Lakhotia et al., 2021)) and compute the BLEU2 score for each of the n = 20 continuations against the reference completion.", "The highest one is used as the score for a prompt.", "Human evaluation (MOS, M-MOS, P-MOS) We ask humans to evaluate three aspects of speech continuation: sound quality, meaningfulness (how natural the text content is, considering both grammar and meaning), and prosody (how consistent and natural the intonation and rhythm are).", "We follow the human evaluation protocol used by Lakhotia et al.
(2021) closely, where raters evaluate the subjective quality of the recordings using headphones, on a scale from 1 to 5 with an increment of 1, the higher the better.", "Only native English speakers were recruited as raters for all three studies.", "The same 100 prompts as in (Lakhotia et al., 2021), drawn from LibriSpeech test-other, are used, and each system generates one continuation per prompt.", "Each continuation is evaluated by at least 5 raters for each aspect.", "The CrowdMOS package (Ribeiro et al., 2011) was used for all experiments, with the recommended recipes for outlier removal.", "All participants were recruited using the Amazon Mechanical Turk platform.", "The metrics for the three aspects are denoted MOS, M-MOS, and P-MOS.", "In Table 1 we report teacher-forcing metrics calculated on the LibriSpeech dev-clean dataset for a diverse set of models.", "In rows 1-8, we report metric values for base MS-TLM models that are trained on LibriSpeech 960h transcribed into HuBERT-100 units.", "In rows 9-12 we consider large MS-TLM models trained on HuBERT transcripts of LL-6K.", "Rows 13 & 14 and 15 & 16 contain metric values for models that are trained on LibriSpeech 960h transcribed using CPC and ground-truth phonetic units.", "Row 1 corresponds to the prosody-ignorant baseline model of (Lakhotia et al., 2021).", "On comparing the two models that only predict units (rows 1 and 5), we see that by simply adding prosodic channels to the input of the model, we obtain a considerably lower negative log-likelihood of the units (u NLL: 1.522 vs. 1.336).", "(Note: the metric values in this section are only comparable within the same unit type; to compare across unit types, one can synthesize the MS-TLM output into a waveform and transcribe the speech with an ASR system to compute metrics in the word or character space.)", "The same trend persists for the models that predict prosodic channels, too.", "For instance, this holds for the continuous-F0 models (rows 9 & 11: 1.513 vs. 1.421) and, equally, for the quantized-F0 HuBERT-based models (rows 10 and 12: 1.522 vs. 1.406).", "Moreover, this holds for the CPC-based models (rows 13 & 14) and even for the models trained on phone transcripts (rows 15 & 16).", "Hence we conclude that prosodic input universally improves speech content modelling.", "Our results in Table 1 also allow us to investigate whether shifting the prosody streams w.r.t. the unit stream (Δ > 0) is useful.", "On comparing rows 6 & 7 we see that this is indeed the case: at the expense of some increase in u NLL (1.337 vs. 1.441), we obtain a considerable relative improvement in d MAE (0.722 → 0.551).", "The trend continues when further increasing Δ.", "We also observe that having prosody in the context is beneficial when modelling prosody itself.", "Indeed, this is the case across all pairs of models (rows 9 & 11, 10 & 12) according to the d MAE and lf MAE metrics.", "Moreover, this holds for unit types other than HuBERT (CPC: rows 13 & 14, phonetic units: rows 15 & 16).", "In our next experiment we study how the number of sampled prompt continuations affects the prosody accuracy metrics (MAE).", "We report results for the four large models (rows 9-12) in Figure", "4.
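The per-stream continuation metrics used above (Min-MAE over n = 20 sampled continuations, and the prompt-continuation consistency correlation) can be sketched as follows. This is our simplified illustration with equal-length value lists, not the authors' implementation; the function names are ours:

```python
from statistics import mean

def min_mae(reference, samples):
    """Minimal mean absolute error between the reference stream and
    each of n sampled continuations (lists of equal length)."""
    return min(mean(abs(a - b) for a, b in zip(s, reference)) for s in samples)

def consistency(prompt_means, continuation_means):
    """Pearson correlation between per-utterance mean stream values
    of the prompts and of the generated continuations."""
    mx, my = mean(prompt_means), mean(continuation_means)
    cov = sum((a - mx) * (b - my) for a, b in zip(prompt_means, continuation_means))
    var_x = sum((a - mx) ** 2 for a in prompt_means)
    var_y = sum((b - my) ** 2 for b in continuation_means)
    return cov / (var_x * var_y) ** 0.5
```

With n samples per prompt, Min-MAE rewards a model that covers at least one mode near the reference, while the correlation captures whether a distinct prompt tempo or pitch carries over into the continuation.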
From these results we observe that models that operate on quantized prosodic streams benefit greatly from sampling multiple candidates.", "In contrast, the two continuous-valued models seem to benefit little, if at all (in the case of the F0 stream).", "We hypothesise that this striking difference is due to the ability of the multinomially sampled trajectories to cover multiple modes of the underlying distribution, while the continuous-valued models produce samples that are pulled towards the median of the underlying distribution due to the L1 loss.", "We next examine prompted continuation for the four large models trained on HuBERT transcripts of LL-6K (they correspond to rows 9-12 in Table 1).", "(Audio samples of speech continuation are included in the supplementary material.)", "These models differ in whether they have prosodic input or not (rows 11 & 12 vs. 9 & 10) and in whether the prosodic channels are discretized or not (rows 10 & 12 vs. 9 & 11).", "Firstly, on comparing models with and without prosodic input, we observe that having prosody in the input improves the accuracy of the prosody continuation (in terms of MAE).", "This holds for predicting duration (e.g., 0.542 and 0.536 for rows 10 and 12).", "We see a higher relative difference for lf (e.g., 0.096 vs. 0.077, same models).", "Our proposed models are also able to leverage the provided prosody input to maintain high consistency of the prosody continuation, as measured by the correlation metrics.", "For example, for the continuous-prosody models the correlation value grows from 0.176 to 0.344 for duration prediction and from 0.093 to 0.494 for the F0 channel.", "Having prosody input also turns out to be important for the word-level BLEU metric: models 11 and 12 outperform their counterparts without prosody inputs, 9 and 10.", "Next, when contrasting discrete- and continuous-prosody models, the following picture emerges.", "For both duration and F0 channels, the discrete models achieve lower Min-MAE errors.", "Further, both discrete models generate considerably more diverse F0 values than either of the continuous models (up to 2x higher std).", "Among the models with prosody inputs, the one with discrete prosody attains higher variability in the d channel.", "In contrast, the correlation metrics favor the prosody-aware continuous model.", "In terms of word-level BLEU scores, the two models are very close, with the quantized model (row 12) slightly ahead.", "We attribute this difference between the models to the ability of the discrete-valued MS-TLM to better describe multi-modal distributions, as we saw above in the experiment reported in Figure", "4.
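The continuous-valued sampling rule discussed above (a Laplacian with its location at the predicted value; for durations, truncated at zero and rounded to a positive integer) can be sketched as follows. This is our illustrative reading of the generation procedure, not the authors' code, and the function name is ours:

```python
import math
import random

def sample_duration(pred, scale, rng=random.Random(0)):
    """Draw a duration from a Laplacian with location `pred` (assumed
    positive, in frames) and scale `scale`, via inverse-CDF sampling;
    truncate at zero by rejection and round to a positive integer."""
    if scale == 0:                           # scale 0.0 in the sweep = no noise
        return max(1, round(pred))
    while True:
        u = rng.random() - 0.5               # u ~ Uniform(-0.5, 0.5)
        # Laplace inverse CDF: x = mu - b * sgn(u) * ln(1 - 2|u|)
        d = pred - scale * math.copysign(1.0, u) * math.log(max(1e-12, 1.0 - 2.0 * abs(u)))
        if d > 0:                            # reject non-positive draws
            return max(1, round(d))
```

For the discrete-valued models, this step is replaced by multinomial sampling over quantized bins with a temperature, which, as noted above, lets repeated samples cover multiple modes rather than collapsing toward the median.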
Table 3 presents the human evaluation results.", "The model with prosody input and quantized prosody performs significantly better than the rest on MOS and M-MOS, and is on par with the variant with prosody input and continuous prosody on P-MOS.", "Note that without the prosody input, the model with quantized prosody performs significantly worse on all metrics, demonstrating the importance of autoregressive generation for discrete representations.", "To summarize, we conclude that", "(i) including prosody input allows better modelling of speech, and", "(ii) architectures that operate on quantized prosody values generally perform better on our introduced metrics.", "In this work, we propose a text-free prosody-aware generative spoken language model, pGSLM, which models textual content and prosodic information explicitly and does not use any text supervision, leveraging self-supervised units instead.", "Through extensive evaluation on a diverse set of metrics, we demonstrated that prosody not only improves content modelling, but also enables, for the first time in the literature, better prompted speech generation that is aware of both the content and the prosody of the prompt.", "We conducted a number of ablation studies to validate the effectiveness of our model design choices.", "As for broader impact, this work serves as a foundation for building better conditional speech generation applications where prosody is essential, such as conversational scenarios.", "In addition, the proposed model could serve as a pre-trained model for classification tasks, such as emotion recognition or syntactic parsing from speech, or for generative tasks such as text-to-speech synthesis with more expressive and coherent prosody.", "Finally, the proposed prosody metrics (teacher-forcing duration and pitch MAE, continuation correctness/consistency/expressiveness) may also be used for the evaluation of text-to-speech synthesis systems that can
produce diverse prosody for a given text input." ]
[ "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "abstain", "abstain", "method", "other", "other", "other", "objective", "objective", "method", "other", "other", "other", "other", "objective", "objective", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", 
"abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "method", "abstain", "abstain" ]
[ "Abstract", "We introduce the well-established social scientific concept of social solidarity and its contestation, anti-solidarity, as a new problem setting to supervised machine learning in NLP to assess how European solidarity discourses changed before and after the COVID-19 outbreak was declared a global pandemic.", "To this end, we annotate 2.3k English and German tweets for (anti-)solidarity expressions, utilizing multiple human annotators and two annotation approaches (experts vs. crowds).", "We use these annotations to train a BERT model with multiple data augmentation strategies.", "Our augmented BERT model that combines both expert and crowd annotations outperforms the baseline BERT classifier trained with expert annotations only by over 25 points, from 58% macro-F1 to almost 85%.", "We use this high-quality model to automatically label over 270k tweets between September 2019 and December 2020.", "We then assess the automatically labeled data for how statements related to European (anti-)solidarity discourses developed over time and in relation to one another, before and during the COVID-19 crisis.", "Our results show that solidarity became increasingly salient and contested during the crisis.", "While the number of solidarity tweets remained on a higher level and dominated the discourse in the scrutinized time frame, anti-solidarity tweets initially spiked, then decreased to (almost) pre-COVID-19 values before rising to a stable higher level until the end of 2020.", "Social solidarity statements and other forms of collective pro-social behavior expressed in online media have been argued to affect public opinion and political mobilization (Fenton, 2008; Margolin and Liao, 2018; Santhanam et al., 2019; Tufekci, 2014).", "The ubiquity of social media enables individuals to feel and relate to real-world problems through solidarity statements expressed online and to act accordingly (Fenton, 2008).", "Social solidarity is a key feature that keeps modern
societies integrated, functioning and cohesive.", "It constitutes a moral and normative bond between individuals and society, affecting people's willingness to help others and share own resources beyond immediate rational individually-, group- or class-based interests (Silver, 1994).", "National and international crises intensify the need for social solidarity, as crises diminish the resources available, raise demand for new and additional resources, and/or require readjustment of established collective redistributive patterns, e.g. inclusion of new groups.", "Because principles of inclusion and redistribution are contested in modern societies and related opinions fragmented (Fenton, 2008; Sunstein, 2018), collective expressions of social solidarity online are likely contested.", "Such statements, which we refer to as anti-solidarity, question calls for social solidarity and its framing, i.e. towards whom individuals should show solidarity, and in what ways (Wallaschek, 2019).", "For a long time, social solidarity was considered to be confined to local, national or cultural groups.", "The concept of a European society and European solidarity (Gerhards et al., 2019), a form of solidarity that goes beyond the nation state, is rather new.", "European solidarity gained relevance with the rise and expansion of the European Union (EU) and its legislative and administrative power vis-à-vis the EU member states since the 1950s (Baglioni et al., 2019; Gerhards et al., 2019; Koos and Seibel, 2019; Lahusen and Grasso, 2018).", "After decades of increasing European integration and institutionalization, the EU entered into a continued succession of deep crises, beginning with the European Financial Crisis in 2010 (Gerhards et al., 2019).", "Experiences of recurring European crises raise concerns regarding the future of European society and its foundation, European solidarity.", "Eurosceptics and right-wing populists claim that social solidarity is, and should be, confined
within the nation state, whereas supporters of the European project see European solidarity as a means to overcome the great challenges imposed on EU countries and its citizens today (Gerhards et al., 2019).", "To date, it is an open empirical question how strong and contested social solidarity really is in Europe, and how it has changed since the onset of the COVID-19 pandemic.", "Against this background, we ask whether we can detect changes in the debates on European solidarity before and after the outbreak of COVID-19.", "Our contributions are:", "(i) We provide a novel Twitter corpus annotated for expressions of social solidarity and anti-solidarity.", "Our corpus contains 2.3k human-labeled tweets from two annotation strategies (experts vs. crowds).", "Moreover, we provide over 270k automatically labeled tweets based on an ensemble of BERT classifiers trained on the expert and crowd annotations.", "(ii) We train BERT on crowdand expert annotations using multiple data augmentation and transfer learning approaches, achieving over 25 points improvement over BERT trained on expert annotations alone.", "(iii) We present novel empirical evidence regarding changes in European solidarity debates before and after the outbreak of the COVID-19 pandemic.", "Our findings show that both expressed solidarity and anti-solidarity escalated with the occurrence of incisive political events, such as the onset of the first European lockdowns.", "Social Solidarity in the Social Sciences.", "In the social sciences, social solidarity has always been a key topic of intellectual thought and empirical investigation, dating back to seminal thinkers such as Rousseau and Durkheim (Silver, 1994).", "Whereas earlier empirical research was mostly confined to survey-based (Baglioni et al., 2019; Gerhards et al., 2019; Koos and Seibel, 2019; Lahusen and Grasso, 2018) or qualitative approaches (Franceschelli, 2019; Gomez Garrido et al., 2018; Heimann et al., 2019), computational social science 
just started tackling concepts as complex as solidarity as part of natural language processing (NLP) approaches (Santhanam et al., 2019).", "In (computational) social science, several studies investigated the European Migration Crisis and/or the Financial Crisis as displayed in media discourses.", "These studies focused on differences in perspectives and narratives between mainstream media and Twitter, using topic models (Nerghes and Lee, 2019), and the coverage and kinds of solidarity addressed in leftist and conservative newspaper media (Wallaschek, 2019, 2020a), as well as relevant actors in discourses on solidarity, using discourse network measures (Wallaschek, 2020b).", "While these studies offer insight into solidarity discourses during crises, they all share a strong focus on mainstream media, which is unlikely to publicly reject solidarity claims (Wallaschek, 2019).", "Social media, in contrast, allows its users to perpetuate, challenge and open new perspectives on mainstream narratives (Nerghes and Lee, 2019).", "A first attempt to study solidarity expressed by social media users during crises has been presented by Santhanam et al. 
(2019).", "They assessed how emojis are used in tweets expressing solidarity relating to two crises through hashtag-based manual annotation (ignoring the actual content of the tweets) and utilizing an LSTM network for automatic classification.", "Their approach, while insightful, provides a rather simple operationalization of solidarity, which neglects its contested, consequential and obligatory aspects vis-à-vis other social groups.", "The current state of social science research on European social solidarity poses a puzzle.", "On the one hand, most survey research paints a rather optimistic view regarding social solidarity in the EU, despite marked cross-national variation (Binner and Scherschel, 2019; Dragolov et al., 2016; Gerhards et al., 2019; Lahusen and Grasso, 2018).", "On the other hand, the rise of political polarization and Eurosceptic political parties (Baker et al., 2020; Nicoli, 2017) suggests that the opinions, orientations and fears of a potentially growing political minority are underrepresented in this research.", "People holding extreme opinions have been found to be reluctant to participate in surveys and to adapt their survey responses to social norms (social desirability bias) (Bazo Vienrich and Creighton, 2017; Heerwegh, 2009; Janus, 2010).", "Research indicates that such minorities may grow in times of crises, with both short-term and long-term effects on public opinion and political trust (Gangl and Giustozzi, 2018; Nicoli, 2017).", "Our paper addresses these problems by drawing on large volumes of longitudinal social media data that reflect potential fragmentation of political opinion (Sunstein, 2018) and its change over time.", "Our approach will thus uncover how contested European solidarity is and how it has developed since the onset of COVID-19.", "Emotion and Sentiment Classification in NLP.", "In NLP, annotating and classifying text (in social media) for sentiment or emotions is a well-established task (Demszky et al., 2020; Ding et al., 2020;
Haider et al., 2020; Hutto and Gilbert, 2014; Oberlander and Klinger, 2018).", "Importantly, our approach focuses on expressions of (anti-)solidarity: for example, texts containing a positive sentiment towards persons, groups or organizations which are at their core anti-European, nationalistic and excluding reflect anti-solidarity and are annotated as such.", "Our annotations therefore go beyond superficial assessment of sentiment.", "In fact, the correlation between sentiment labels (e.g., as obtained from Vader (Hutto and Gilbert, 2014)) and our annotations in Section 3 is only 0.2.", "Specifically, many tweets labeled as solidarity use negatively connoted emotion words.", "We use the unforeseen onset of the COVID-19 crisis, beginning with the first European lockdown, enacted late February to early March 2020, to analyze and compare social solidarity data before and during the COVID-19 crisis as if it were a natural experiment (Creighton et al., 2015; Kuntz et al., 2017).", "In order to utilize this strategy and keep the baseline solidarity debate comparable before and after the onset of the COVID-19 crisis, we confined our sample to tweets with hashtags predominantly relating to two previous European crises whose effects continue to concern Europe, its member states and citizens:", "(i) Migration and the distribution of refugees among European member states, and", "(ii) Financial solidarity, i.e.
financial support for indebted EU countries.", "The former solidarity debate predominantly refers to the Refugee Crisis since 2015 and the living situation of migrants; the latter mostly relates to the Financial Crisis, followed by the Euro Crisis, and concerns the excessive indebtedness of some EU countries since 2010.", "(Further analyses, not shown, revealed that around 20 percent of the tweets in our sample relate to solidarity regarding other issues.)", "Data.", "We crawled 271,930 tweets between 01.09.2019 and 31.12.2020, written in English or German and geographically restricted to Europe, to obtain setups comparable to the survey-based social science literature on European solidarity.", "We only crawled tweets that contained specific hashtags, to filter for our two topics, i.e. refugee and financial solidarity.", "We started with an initial list of hashtags (e.g., #refugeecrisis, #eurobonds), which we then expanded via co-occurrence statistics.", "We manually evaluated 456 co-occurring hashtags with at least 100 occurrences to see if they represented the topics we are interested in.", "Ultimately, we selected 45 hashtags (see appendix) to capture a wide range of the discourse on migration and financial solidarity.", "Importantly, we keep the hashtag list associated with our 270k tweets constant over time.", "Definition of Social Solidarity.", "In line with social scientific concepts of social solidarity, we define social solidarity as expressed and/or called for in online media as the preparedness to share one's own resources with others, be that directly by donating money or time in support of others or indirectly by supporting the state to reallocate and redistribute some of the funds gathered through taxes or contributions (Lahusen and Grasso, 2018, p. 4).", "We define anti-solidarity as expressions that contest this type of social solidarity and/or deny solidarity towards vulnerable social groups and other European states, e.g.
by promoting nationalism or the closure of national borders (Burgoon and Rooduijn, 2021; Cinalli et al., 2020; Finseraas, 2008; Wallaschek, 2017).", "Expert Annotations.", "After crawling and preparing the data, we set up guidelines for annotating tweets.", "Overall, we set four categories to annotate, with solidarity and anti-solidarity being the most important ones.", "A tweet indicating support for people in need, the willingness and/or gratitude towards others to share resources and/or help them is considered expressing solidarity.", "The same applies to tweets criticizing the EU in terms of not doing enough to share resources and/or help socially vulnerable groups, as well as advocating for the EU as a solidarity union.", "A tweet is considered to be expressing anti-solidarity statements if the above-mentioned criteria are reversed and/or the tweet contains tendencies of nationalism or advocates for closed borders.", "(We follow a purposeful sampling frame, but this necessarily introduces a bias in our data; while we took care of including a variety of hashtags, we do not claim to have captured the full extent of discourse concerning the topics migration and financial solidarity.)", "Not all tweets fit into these classes, thus we introduce two additional categories: ambivalent and not applicable.", "While the ambivalent category refers to tweets that could be interpreted as both expressing solidarity and anti-solidarity statements, the second category is reserved for tweets that do not contain the topic of (anti-)solidarity at all or refer to topics that are not concerned with discourses on refugee or financial solidarity.", "Table 1 contains example tweets for all categories.", "Full guidelines for the annotation of tweets are given in the appendix.", "We divided the annotation process into six working stages (I-VI) to refine our data set and annotation standards over time and strengthen inter-annotator reliability through subsequent discussions among
annotators and social science experts.", "Our annotators included four university students majoring in computer science, one computer science faculty member, as well as two social science experts (one PhD student and one professor).", "We started the training of seven annotators with a small dataset that they annotated independently and refined the guidelines during the annotation process.", "In the training period, which lasted three iterations (I-III), we achieved Cohen's kappa values of 0.51 among seven annotators.", "In working stage IV, two groups of two annotators annotated 339 tweets with hashtags not included before.", "Across the four annotators, Cohen's kappa values of 0.49 were reached.", "In working stages V and VI, one group of two students annotated overall 588 tweets, with resulting kappa values of 0.79 and 0.77, respectively.", "While the kappa value was low in the first stages, we managed to raise the inter-annotator reliability over time through discussions with the social science experts and extension of the guidelines.", "We also introduced a gold-standard for annotations from stage II onward which served as orientation.", "This was determined by majority voting and discussions among the annotators.", "For cases where a decision on the gold-standard label could not be reached, a social science expert decided on the gold-standard label; some hard cases were left undecided (not included in the dataset).", "The gold-standard additionally served as the human reference performance which we compared the model against.", "On average across all stages, our kappa agreement is 0.64 for four and 0.69 for three classes (collapsing ambivalent and not applicable), while the macro F1-score is 69% for four and 78.5% for three classes.", "However, in the final stages, the agreement is considerably higher: above 80% macro-F1 for four and between 85.4% and 89.7% macro-F1 for three classes.", "Crowd annotations.", "We also conducted a 'crowd experiment' with students
in an introductory course to NLP.", "We provided students with the guidelines and 100 expert annotated tweets as illustrations.", "We trained crowd annotators in three iterations.", "1) They were assigned reading the guidelines and looking at 30 random expert annotations.", "Then they were asked to annotate 20 tweets themselves and self-report their kappa agreement with the experts (we provided the labels separately so that they could further use the 20 tweets to understand the annotation task).", "2) We repeated this with another 30 tweets for annotator training and 20 tweets for annotator testing.", "3) They received 30 expert-annotated tweets for which we did not give them access to expert labels, and 30 entirely novel tweets, that had not been annotated before.", "These 60 final tweets were presented in random order to each student.", "50% of the 30 novel tweets were taken from before September 2020 and the other 50% were taken from after September 2020.", "125 students participated in the annotation task.", "The annotation experiment was part of a bonus the students could achieve for the course (counted 12.5% of the overall bonus for the class).", "Each novel tweet was annotated by up to 3 students (2.7 on average).", "To obtain a unique label for each crowd-annotated tweet, we used the following simple strategy: we either chose the majority label among the three annotators or the annotation of the most reliable annotator in case there was no unique majority label.", "The annotator that had the highest agreement with the expert annotators was taken as most reliable annotator.", "Kappa agreements of students with the experts are shown in Figure 1. 
The majority of students have a kappa agreement with the gold-standard of between 0.6 and 0.7 when three classes are taken into account and between 0.5 and 0.6 for four classes.", "In Table 2, we further show statistics on our annotated datasets: we have 2299 annotated tweets in total, about 60% of which have been annotated by crowd-workers.", "About 50% of all tweets are annotated as solidarity, 20% as anti-solidarity, and 30% as either not-applicable or ambivalent.", "In our annotations, 1196 tweets are English and 1103 are German.", "(Footnote 3: In our automatically labeled data, the majority of tweets is German. We assumed all German tweets to come from within the EU, while the English tweets would be geofiltered more aggressively.)", "Finally, we note that the distributions of labels for expert and crowd annotations are different, i.e., the crowd annotations cover more solidarity tweets.", "The reason is twofold:", "(a) for the experts, we oversampled hashtags that we believed to be associated more often with anti-solidarity tweets, as the initial annotations indicated that these would be in the minority, which we feared to be problematic for the automatic classifiers.", "(b) The time periods in which the tweets for the experts and crowd annotators fall differ.", "We use multilingual BERT (Devlin et al., 2019) / XLM-R (Conneau et al., 2020) to classify our tweets in a 3-way classification problem (solidarity, anti-solidarity, other), not differentiating between the classes ambivalent and non-applicable since our main focus is on the analysis of changes in (anti-)solidarity.", "We use the baseline MBERT model, bert-base-multilingual-cased, and the base XLM-R model, xlm-roberta-base.", "We implemented several data augmentation/transfer learning techniques to improve model performance: Oversampling of minority classes: We randomly duplicate (expert and crowd annotated) tweets from minority classes until all classes have the same number of tweets as the majority class
solidarity .", "Back-translation : We use the Google Translate API to translate English tweets into a pivot language (we used German), and pivot language tweets back into English (for expert and crowd-annotated tweets).", "Fine-tuning : We fine-tune MBERT / XLM-R with masked language model and next sentence prediction tasks on domain-specific data, i.e., our crawled unlabeled tweets.", "Auto-labeled data : As a form of self-learning, we train 9 different models (including oversampling, back-translation, etc.) on the expert and crowd-annotated data, then apply them to our full dataset (of 270k tweets, see below).", "We only retain tweets where 7 of 9 models agree and select 35k such tweets for each label ( solidarity , anti-solidarity , other ) into an augmented training set, thus increasing training data by 105k auto-labeled tweets.", "Ensembling : We take the majority vote of 15 different models to leverage heterogeneous information.", "The k = 15 models, like the k = 9 models above, were determined as the topk models by their dev set performance.", "We also experimented with re-mapping multilingual BERT and XLM-R (Cao et al., 2020; Zhao et al., 2020a,b) as they have not seen parallel data during training, but found only minor effects in initial experiments.", "In 5.1, we describe our experimental setup.", "In 5.2, we show the classification results of our baseline models on the annotated data and the effects of our various data augmentation and transfer learning strategies.", "In 5.3, we analyze performances of our best-performing models.", "In 5.4, we automatically label our whole dataset of 270k tweets and analyze changes in solidarity over time.", "To examine the effects of various factors, we design several experimental conditions.", "These involve", "(i) using only hashtags for classification, ignoring the actual tweet text,", "(ii) using only text, without the hashtags,", "(iii) combining expert and crowd annotations for training,", "(iv) examining the 
augmentation and transfer learning strategies,", "(v) ensembling various models using majority voting.", "All models are evaluated on randomly sampled test and dev sets of size 170 each.", "Both the dev and test sets are taken from the expert annotations.", "We use the dev set for early stopping.", "To make sure our results are not an artefact of unlucky choices of test and dev sets, we report averages of 3 random splits where test and dev set contain 170 instances in each case (for reasons of computational costs, we do so only for selected experimental conditions).", "We report the macro-F1 score to evaluate the performance of different models.", "Hyperparameters of our models can be found in our GitHub repository.", "The main results are reported in Table 3. Using only hashtags and expert annotated data yields a macro-F1 score of below 50% for MBERT and XLM-R.", "Including the full texts improves this by over 8 points (almost 20 points for XLM-R).", "Adding crowd-annotations yields another substantial boost of more than 6 points for MBERT.", "Removing hashtags in this situation decreases the performance by between 5 and 6 points.", "This means that the hashtags indeed contain important information, but the texts are more important than the hashtags: with hashtags only, we observe macro-F1 scores between 42 and 49%, whereas with text only the performances are substantially higher, between 58 and 60%.", "While using hashtags only means less data since not all of our tweets have hashtags, the performance with only hashtags on the test sets stays below 50%, both with 572 and more than 1500 tweets for training.", "Next, we analyze the data augmentation and transfer learning techniques.", "Including auto-labeled data drastically increases the train set, from below 2k instances to over 100k.", "Even though these instances are self-labeled, performance increases by over 13 points to about 78% macro-F1.", "Additionally oversampling or backtranslating the data does not yield further benefits,
but pretraining on unlabeled tweets is effective even here and boosts performance to over 78%.", "Combining all strategies yields scores of up to almost 80%.", "Finally, when we consider our ensemble of 15 models, we achieve a best performance of 84.5% macro-F1 on the test set, close to the human macro-F1 agreement for the experts in the last rounds of annotation.", "To sum up, we note:", "(i) adding crowd-annotated data clearly helps, despite the crowd-annotated data having a different label distribution;", "(ii) including text is important for classification as the classification with hashtags only performs considerably worse;", "(iii) data augmentation (especially self-labeling), combining models, and transfer learning strategies have a further clearly positive effect.", "Our most accurate ensemble models perform best for the majority class solidarity with an F1-score of almost 90%, about 10 points better than for anti-solidarity and over 5 points better than for the other class.", "A confusion matrix for this best performing model is shown in Table 4.
Here, anti-solidarity is disproportionately misclassified as either solidarity or the other class.", "Table 5 shows selected misclassifications for our ensemble model with performance of about 84.5% macro-F1.", "This reveals that the models sometimes leverage superficial lexical cues (e.g., the German political party 'AfD' is typically associated with anti-solidarity towards the EU and refugees), including hashtags ('Remigration'); see Figure 2, where we used LIME (Ribeiro et al., 2016) to highlight words the model pays attention to.", "To further gain insight into the misclassifications, we had one social science expert reannotate all misclassifications.", "Table 3: Macro-F1 scores (in %) for different conditions. Columns: Condition | Train size | MBERT Dev | MBERT Test | XLM-R Dev | XLM-R Test. Rows: E, Hashtag only | 572 | 51.7±0.5 | 49.0±1.1 | 48.0±0.9 | 44.0±0.8; E | 579 | 64.2±1.2 | 57.7±0.4 | 64.0 | 63.3; E+C | 1959 | 66.4±0.5 | 64.0±1.5 | 65.0 | 64.8; E+C, No Hashtags | 1959 | 64.0±0.3 | 58.0±0.5 | 62.0 | 60.0; E+C, Hashtag Only | 1567 | 55.8±2.0 | 49.5±2.1 | 47.8 | 42.2; E+C+Auto label | 106959 | 76.4 | 78.3 | 77.5 | 78.4; E+C+Auto label+Oversample | 108048 | 76.4 | 76.3 | 77.4 | 76.9; E+C+Auto label+Backtranslation | 108918 | 76.0 | 77.1 | 77.5 | 78.7; E+C+Auto label+Pretraining | 106959 | 78.4 | 78.8 | 78.6 | 79.0; E+C+ALL | 110007 | 78.8±1.3 | 78.6±0.8 | 78.9 | 79.7.", "From the 25 errors that our best model makes in the test set of 170 instances, the expert thinks that 12 times the gold standard is correct, 7 times the model prediction is correct, and in further 6 cases neither the model nor the gold standard are correct.", "This hints at some level of errors in our annotated data; it further supports the conclusion that our model is close to the human upper bound.", "Throughout the period observed in our data, discourses relating to migration were much more frequent than financial solidarity discourses.", "We crawled an average of 2526 tweets per week relating to migration (anti-)solidarity and an average of 174 financial (anti-)solidarity tweets, judging from the associated hashtags.", "We used our
best performing model to automatically label all our 270k tweets between September 2019 and December 2020.", "Solidarity tweets were about twice as frequent as anti-solidarity tweets, reflecting a polarized discourse in which solidarity statements clearly dominated.", "Figure 3 shows the frequency curves for solidarity, anti-solidarity and other tweets over time in our sample.", "The figure also gives the ratio S/A := (# solidarity tweets) / (# anti-solidarity tweets), which shows the frequency of solidarity tweets relative to anti-solidarity tweets.", "Values above one indicate that more solidarity than anti-solidarity statements were tweeted that day.", "Figure 3 displays several short-term increases in solidarity statements in our window of observation.", "Further analysis shows that these peaks have been immediate responses to drastic politically relevant events in Europe, which were also prominently covered by mainstream media, e.g., COVID-19-related news, natural disasters, fires, and major policy changes.", "We illustrate this in the following.", "On March 11th 2020, the World Health Organization (WHO) declared the COVID-19 outbreak a global pandemic.", "Shortly before and after, European countries started to take a variety of countermeasures, including stay-at-home orders for the general population, private gathering restrictions, and the closure of educational and childcare institutions (ECDC, 2020a).", "With the onset of these interventions, both solidarity and anti-solidarity statements relating to refugees and financial solidarity increased dramatically.", "At its peak at the beginning of March, anti-solidarity statements markedly outnumbered solidarity statements (we recorded 2189 solidarity tweets vs.
2569 anti-solidarity tweets on March 3rd).", "In fact, the period in early March 2020 is the only extended period in our data where anti-solidarity statements outweighed solidarity statements.", "Figure 2: Our best-performing model (macro-F1 of 84.5%) predicts anti-solidarity for the current example because of the hashtag #Remigration (according to LIME). The tweet, also given as translation in Table 5 (2) below, is overall classified as other in the gold standard, as it may be considered as expressing no determinate stance. Here, we hide identity revealing information in the tweet, but our classifier sees it.", "Table 5: Selected misclassifications of the best-performing ensemble model. Columns: Text | Gold | Pred. Rows: (1) You can drink a toast with the AFD misanthropists #seenotrettung #NieMehrCDU | S | A; (2) Why is an open discussion about #Remigration (not) yet possible? | O | A; (3) Raped and Beaten, Lesbian #AsylumSeeker Faces #Deportation | A | O.", "The dominance of solidarity statements was reestablished after two weeks.", "Over the following months, anti-solidarity statements decreased again to pre-COVID-19 levels, whereas solidarity statements remained comparatively high, with several peaks between March and September 2020.", "Solidarity and anti-solidarity statements shot up again in early September 2020, with an unprecedented climax on September 9th.", "Introspection of our data shows that the trigger for this was the precarious situation of refugees after a fire destroyed the Moria Refugee Camp on the Greek island of Lesbos on the night of September 8th.", "Human Rights Watch had compared the camp to an open-air prison in which refugees lived under inhumane conditions, and the disaster spurred debates about the responsibilities of EU countries towards refugees and the countries hosting refugee hot spots (i.e.
Greece and Italy).", "At that time, COVID-19 infection rates in the EU were increasing but still low, and national measures to prevent the spread of infections were relaxed in some and tightened in other EU countries (ECDC, 2020a,b).", "Further analyses (not displayed) show that the dominance of solidarity over anti-solidarity statements at the time was driven by tweets using hashtags relating to migration.", "The contemporaneous discourse on financial solidarity between EU countries was much less pronounced.", "From September 2020 to December 2020, solidarity and anti-solidarity statements were about equal in frequency, which means that anti-solidarity was on average on a higher level compared to the earlier time points in our time frame.", "This period also corresponds to the highest COVID-19 infection rates witnessed in the EU, on average, during the year 2020.", "In fact, the Spearman correlations between the number of anti-solidarity tweets in our data and infection rates are 0.45 and 0.47, respectively (infection rates within Germany and the EU); see Figure 4 in the appendix.", "Correlation with the number of solidarity tweets is, in contrast, non-significant.", "Discussion.", "From late February to mid-March 2020, EU governments began enacting lockdowns and other measures to contain COVID-19 infection rates, turning people's everyday lives upside down.", "During this time frame, anti-solidarity statements peaked in our data, but solidarity statements quickly dominated again thereafter.", "During the summer of 2020, anti-solidarity tweets decreased whereas solidarity tweets continued to prevail on higher levels than before.", "A major peak on September 9th, in the aftermath of the destruction of the Moria Refugee Camp, signifies an intensification of the polarized solidarity discourse.", "From September to December 2020, anti-solidarity and solidarity statements were almost equal in number.", "Thus, the onset of the COVID-19 crisis as well as times of high infection rates
concurred with disproportionately high levels of anti-solidarity, despite a dominance of solidarity overall.", "Whether the relationship between anti-solidarity and intensified strains during crises is indeed causal will be the scope of our future research.", "6 Conclusion. In this paper, we contributed the first large-scale human and automatically annotated dataset labeled for solidarity and its contestation, anti-solidarity.", "The dataset uses the textual material in social media posts to determine whether a post shows (anti-)solidarity with respect to relevant target groups.", "Our annotations, conducted by both trained experts and student crowd-workers, show overall good agreement levels for a challenging novel NLP task.", "We further trained augmented BERT models whose performance is close to the agreement levels of the experts and which we used for large-scale trend analysis of over 270k media posts before and after the onset of the COVID-19 pandemic.", "(Footnote 4: We made sure that the substantial findings reported here are not driven by inherently German (anti-)solidarity discourses. Still, our results are bound to the opinions of people posting tweets in the English and German language.)", "Our findings show that (anti-)solidarity statements climaxed momentarily with the first lockdown, but the predominance of solidarity expressions was quickly restored at higher levels than before.", "Solidarity and anti-solidarity statements were balanced by the end of the year 2020, when infection rates were rising.", "The COVID-19 pandemic constitutes a worldwide crisis, with profound economic and social consequences for contemporary societies.", "It manifests yet another challenge for European solidarity, by putting a severe strain on available resources, i.e.
national economies, health systems, and individual freedom.", "While the EU, its member countries, and residents continued to struggle with the consequences of the Financial Crisis and its aftermath, as well as migration, the COVID-19 pandemic has accelerated the problems related to these former crises.", "Our data suggest that the COVID-19 pandemic has not severely negatively impacted the willingness of European Twitter users to take responsibility for refugees, while financial solidarity with other EU countries remained low on the agenda.", "Over time, however, this form of expressed solidarity became more controversial.", "On the one hand, these findings are in line with survey-based, quantitative research and its rather optimistic overall picture regarding social solidarity in the EU during earlier crises (Baglioni et al., 2019; Gerhards et al., 2019; Koos and Seibel, 2019; Lahusen and Grasso, 2018); on the other hand, results from our correlation analysis suggest that severe strains during crises coincide with increased levels of anti-solidarity statements.", "We conclude that a convergence of opinion (Santhanam et al., 2019) among the European Twitter-using public regarding the target audiences of solidarity, and the limits of European solidarity vs.
national interests, is not in sight.", "Instead, our widened analytic focus has allowed us to examine pro-social online behavior during crises and its opposition, revealing that European Twitter users remain divided on issues of European solidarity.", "We thank the anonymous reviewers whose comments greatly improved the final version of the paper.", "Ethical considerations.", "We will release only tweet IDs in our final dataset.", "The presented tweets in our paper were paraphrased and/or translated and therefore cannot be traced back to the users.", "No user identities of any annotator (neither expert nor crowd worker) will ever be revealed or can be inferred from the dataset.", "Crowd workers were made aware that the annotations are going to be used in further downstream applications and they were free to choose to submit their annotations.", "While our trained model could potentially be misused, we do not foresee greater risks than with established NLP applications such as sentiment or emotion classification." ]
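The crowd-label aggregation described above (majority vote over up to three annotators, falling back to the label of the annotator with the highest kappa agreement with the experts when there is no unique majority) can be sketched as follows; the function name and data layout are illustrative assumptions, not from the paper's released code.

```python
from collections import Counter

def aggregate_crowd_label(annotations, reliability):
    """Resolve one tweet's crowd annotations to a single label.

    annotations: dict annotator_id -> label (up to 3 annotators per tweet)
    reliability: dict annotator_id -> kappa agreement with the expert labels

    Returns the unique majority label if one exists; otherwise the label
    given by the most reliable annotator.
    """
    counts = Counter(annotations.values()).most_common()
    # Unique majority: the top label strictly outnumbers the runner-up.
    if len(counts) == 1 or counts[0][1] > counts[1][1]:
        return counts[0][0]
    # No unique majority: defer to the annotator closest to the experts.
    most_reliable = max(annotations, key=lambda a: reliability[a])
    return annotations[most_reliable]
```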
[ "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "objective", "abstain", "method", "result", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "objective", "other", "other", "method", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", 
"abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "other", "other", "other", "other", "other", "other", "other" ]
[ "The task of Dialogue Act Classification (DAC), which purports to capture communicative intent, has been studied extensively.", "But these studies limit themselves to text.", "Non-verbal features (change of tone, facial expressions etc.) can provide cues to identify DAs, thus stressing the benefit of incorporating multi-modal inputs in the task.", "Also, the emotional state of the speaker has a substantial effect on the choice of the dialogue act, since conversations are often influenced by emotions.", "Hence, the effect of emotion on the automatic identification of DAs also needs to be studied.", "In this work, we address the role of both multi-modality and emotion recognition (ER) in DAC.", "DAC and ER help each other by way of multi-task learning.", "One of the major contributions of this work is a new dataset, the multimodal Emotion-aware Dialogue Act dataset called EMOTyDA, collected from open-sourced dialogue datasets.", "To demonstrate the utility of EMOTyDA, we build an attention based (self, inter-modal, inter-task) multi-modal, multi-task Deep Neural Network (DNN) for joint learning of DAs and emotions.", "We show empirically that multimodality and multi-tasking achieve better performance of DAC compared to uni-modal and single task DAC variants.", "Dialogue Act Classification (DAC) is concerned with deciding the type, i.e., the communicative intention (question, statement, command etc.), of the speaker's utterance.", "DAC is very important in the context of discourse structure, which in turn supports intelligent dialogue systems, conversational speech transcription and so on.", "Considerable work has been done on classical Machine Learning (ML) based DAC (Jurafsky et al., 1997), (Stolcke et al., 2000), (Verbree et al., 2006), etc.
and Deep Learning (DL) based DAC (Kalchbrenner and Blunsom, 2013), (Papalampidi et al., 2017), (Liu et al., 2017), (Ribeiro et al., 2019), (Ortega et al., 2019), (Saha et al., 2019), etc.", "(The authors have contributed equally.)", "Humans are emotional entities.", "A speaker's emotional state considerably influences or affects its intended content or its pragmatic content (Barrett et al., 1993).", "An utterance such as Okay sure or Ya right (say) can be considered as agreement or, in case of sarcasm, disagreement.", "For expressive DAs such as greeting, thanking, apologizing etc., the speaker's feeling or emotion can assist in recognizing the true communicative intent and vice-versa.", "Thus, it is important to consider the speaker's emotion when deciding on the DA.", "There is considerable work on ER (Cowie et al., 2001), (Jain et al., 2018), (Zhang et al., 2018), etc. and on adapting Virtual Agents (VAs) to act accordingly (Huang et al., 2018), (Zhou et al., 2018), (Fung et al., 2018), etc.", "But very little research has been done that addresses the impact of emotion while deciding the DA of an utterance (Novielli and Strapparava, 2013), (Bosma and Andre, 2004).", "As DAs primarily dictate the flow of any dialogue conversation (be it human-human or human-computer), such synergy of ER and DAC is required.", "Research too has shown the benefit of utilizing the combination of text and nonverbal cues (Poria et al., 2017b), (Poria et al., 2017a) etc., for solving various Natural Language Processing (NLP) tasks.", "The main advantage of integrating other modalities with text is the usage of behavioral signs present in the acoustic (vocal modulations) and visual (facial expression) modalities.", "In addition, the various modalities offer important signals to better identify the speaker's communicative intention and emotional state.", "This will in effect help create sturdy and more reliable DAC models.", "In this paper, we study the influence of emotion on the identification of
DAs, by utilizing the combination of text, vocal modulations and facial expressions for task-independent conversations.", "DAC is our primary task, assisted by Emotion Recognition (ER) as an auxiliary task.", "We implement an attention based multi-modal, multi-tasking DNN to do joint modeling of DAC and ER.", "Also, we introduce a new dataset to help advance research in multi-modal DAC.", "The key contributions of this paper are as follows: i.", "We curate a new dataset called EMOTyDA for facilitating multi-modal DAC research with high-quality annotations, including emotionally aided cues and conversational context features.", "We believe this dataset will advance research in multi-modal DAC; ii.", "We point to different scenarios where discrepancy in DAC is evident across different modalities, thus showing the importance of multi-modal approaches to DAC; iii.", "We show, using various instances, the usefulness of considering the emotional state of the user while identifying DAs.", "Consequently, we deduce that EMOTyDA will lead to a novel sub-task for future research: emotion aware DAC; iv.", "We propose an attention based (self, inter-modal, inter-task) multitask, multi-modal DNN for jointly optimizing the DAC and ER tasks and show its benefit over single task DAC variants.", "Through this, we also establish that multi-modal DAC performs significantly better than uni-modal DAC.", "Dialogue Act Frameworks: DAC has been investigated since the late 90s (Reithinger and Klesen, 1997), (Stolcke et al., 1998) and early 2000s (Stolcke et al., 2000), (Grau et al., 2004).", "Much of this research, however, uses chat transcripts with only the text modality, due partly to the unavailability of multi-modal open-source datasets.", "In (Khanpour et al., 2016), the authors apply stacked LSTMs to classify speech acts.", "In (Kumar et al., 2018), the authors developed a Hierarchical Network based approach using Bi-LSTMs and the CRF.", "A contextual self-attention system fused with hierarchical
recurrent units was proposed by the authors of (Raheja and Tetreault, 2019) to develop a sequence label classifier.", "The authors of (Yu et al., 2019) proposed a method for the capture of long-range interactions that span a series of words using a Convolutional Network based approach.", "In (Saha et al., 2019), the authors proposed several ML and DL based approaches such as Conditional Random Fields, clustering and word embeddings to identify DAs.", "However, all these works identify DAs by utilizing solely the textual modality without the use of emotional cues.", "Emotion aware DAs.", "Within a multi-modal setting, little work is available in the literature that studies the impact of emotional state in the evaluation of DAs.", "The effect of integrating facial features as a way of identifying emotion to classify DAs was examined by the authors in (Boyer et al., 2011).", "They demonstrated their work on tutorial dialogue sessions, which are typically task-oriented, and applied logistic regression to identify DAs.", "But they studied only cognitive-affective states such as confusion and flow as the emotional categories to learn DAs.", "In (Novielli and Strapparava, 2013), the authors examined the impact of affect analysis in DA evaluation for an unsupervised DAC model.", "The authors made use of lexicon based features from WordNet Affect and SentiWordNet to map them to emotion labels to model the DAs in an LSA based approach.", "The authors of (Ihasz and Kryssanov, 2018) also inspected the impact of emotions mediated with intention or DAs for an in-game Japanese dialogue.", "Their goal was to construct DA-emotion combinations from the pre-annotated corpus.", "However, such stringent associations or dis-associations amongst DA-emotion pairs may not truly hold for real life conversations.", "To facilitate and enhance the research in multimodal DAC assisted with user emotion, we introduce a new dataset (EMOTyDA) consisting of short videos of dialogue conversations manually annotated with its DA
along with its pre-annotated emotions.", "To gather potentially emotion-rich conversations to explore their effect on DAC, we scanned the literature for existing multi-modal ER datasets.", "During our initial search, we obtained several multi-modal ER datasets, which include YouTube (Morency et al., 2011), MOUD (Perez-Rosas et al., 2013), IEMOCAP (Busso et al., 2008), ICT-MMMO (Wollmer et al., 2013), CMU-MOSI (Zadeh et al., 2016), CMU-MOSEI (Zadeh et al., 2018) and MELD (Poria et al., 2019), etc.", "However, we zeroed in on the IEMOCAP and MELD datasets for the further investigation of our problem statement.", "The reason behind this choice was that all the remaining datasets mentioned above consist of monologues involving opinions and product reviews, whereas our research requires task-independent dyadic or multi-party conversations to analyze its full potential.", "Neither of these available datasets is annotated for its corresponding DAs.", "Also, benchmark DAC datasets such as Switchboard (SWBD) (Godfrey et al., 1992) and the ICSI Meeting Recorder (Shriberg et al., 2004) consist of text and audio-based conversations, whereas TRAINS (Heeman and Allen, 1995) consists of solely text-based conversations with no emotional tags.", "The HCRC Map Task corpus (Anderson et al., 1991) additionally encompasses the audio modality with the transcripts, but the corpus itself has task-oriented conversations and is not annotated for its emotion tags.", "It is to be noted that task-oriented conversations generally restrict the presence of diverse tags which are commonly encountered in task-independent conversations.", "To the best of our knowledge, at the time of writing, we were unaware of any sizable and open-access DA and emotion annotated multi-modal dialogue data.", "Thus, the MELD and IEMOCAP datasets have been manually annotated for the corresponding DAs to encourage and promote novel research on multi-modal DAC and to build a multi-tasking system that allows the DA and emotion for an utterance
to be learned jointly.", "Over the years, the SWBD-DAMSL tag-set comprising 42 DAs, developed by Jurafsky (1997), has been widely used for DAC on task-independent dyadic conversations such as the SWBD corpus.", "Thus, we use the SWBD-DAMSL tag-set as the base for conceiving the tag-set for the EMOTyDA dataset, since both datasets contain task-independent conversations.", "Of the 42 SWBD-DAMSL tags, the 12 most commonly occurring tags have been used to annotate the utterances of the EMOTyDA dataset.", "The choice of 12 tags is due to the limited size of the EMOTyDA dataset in comparison to the SWBD corpus.", "It stems from the fact that many tags of the SWBD-DAMSL tag-set are highly unlikely to ever appear in the EMOTyDA dataset, owing to its smaller number of utterances and the lower diversity of occurrence of such fine-grained tags.", "The 12 most commonly occurring chosen tags are Greeting (g), Question (q), Answer (ans), Statement-Opinion (o), Statement-Non-Opinion (s), Apology (ap), Command (c), Agreement (ag), Disagreement (dag), Acknowledge (a), Backchannel (b) and Others (oth).", "For the current work, we have selected a subset of 1039 dialogues from MELD, amounting to 9989 utterances, and the entire IEMOCAP dataset of 302 dialogues, amounting to 9376 utterances, to curate the EMOTyDA dataset.", "Details of the original MELD and IEMOCAP datasets are provided in Appendix 6.", "Three annotators, who were graduates in English linguistics, were engaged to annotate the utterances with the appropriate DAs out of the 12 chosen tags.", "They were asked to annotate these utterances by viewing only the available video, considering the dialogue history, without information about the pre-annotated emotion tags.", "This was done to ensure that the dataset does not get biased toward specific DA-emotion pairs.", "An inter-annotator agreement score over 80% was considered reliable.", "It was determined based on the count of utterances for which more than two
annotators agreed on a particular tag.", "To remove the discrepancy in the number of emotion tags between the IEMOCAP and MELD datasets, we mapped the joy tag of MELD to the happy tag of IEMOCAP, finally settling on 10 tags from IEMOCAP for the EMOTyDA dataset.", "The EMOTyDA dataset 1 now comprises 1341 dyadic and multi-party conversations, resulting in a total of 19,365 utterances or annotated videos with the corresponding DA and emotion tags, considering the dialogue history.", "The dataset contains approximately 22 hours of recordings.", "The source distribution and major-speaker statistics of the dataset are shown in Figures 3a and 3b, respectively.", "Since DAC and ER tasks are known to exploit contextual features, i.e., the dialogue history (Yu et al., 2019), utterances in the dataset are accompanied by their corresponding contextual utterances, which are typically the preceding dialogue turns of the speakers participating in the dialogue.", "Each of the utterances contains three modalities: video, audio, and text.", "All the utterances are also accompanied by their speaker identifiers.", "Table 1 shows a few utterances along with the corresponding DAs and emotion labels from the proposed dataset.", "1 The dataset with its DA and emotion tags will be made publicly available to the research community.", "Figure 2: (a) Incongruent modalities in DAC, (b) Importance of emotion in DAC.", "Distributions of DA and emotion labels across the source datasets are shown in Figures 1a and 1b, respectively.", "In the current work, we seek to analyze the effect of emotion in classifying DAs.", "Also, DAC on text alone misses extra information that can be gained from the associated modalities.", "Below, we analyze some samples that require emotion-aided and multi-modal reasoning.", "We exemplify using a few instances from our proposed dataset in order to support our claim that DA is often expressed in a multi-modal way while exploiting the emotional
state of the speaker.", "Role of Emotion.", "In Figure 2b, we present two instances from the dataset where the emotional state of the user seems beneficial in deciding the DA of an utterance.", "In the first example, the reference to the sad and dismal state of the speaker leads it to acknowledge the presence of the hearer.", "In the second case, the angry emotional state of the speaker forces her to disagree with the opinions or suggestions of the other people involved in the conversation.", "The examples above illustrate the importance of having emotional information, as emotions affect the communicative intention, or DA, of the speaker.", "The presence of emotion in our dataset equips models with the ability to use additional information while reasoning about DAs.", "Role of Multi-modality.", "Figure 2a shows two cases where the DA is articulated through incongruity between modalities.", "In the first instance, the facial modality implies anger or fury.", "The textual modality, however, lacks any visible sign of displeasure; on the contrary, it indicates agreement.", "So, the textual claims do not corroborate the facial features.", "In the second case, the textual modality hints at pure agreement.", "The audio modality, however, expresses a sarcastic appeal.", "In both these cases, there exists an inconsistency between modalities, which acts as a strong indicator that multi-modal information is also important in providing additional cues for identifying DAs.", "The availability of complementary information across multiple modalities improves the model's ability to learn the discriminatory patterns that are responsible for this complex process.", "This section describes the proposed multi-task, multi-modal approach, followed by the implementation details.", "Textual Features.", "The transcriptions available for each video form the source of the textual modality 2 .", "To extract textual features, pretrained GloVe (Pennington et al., 2014) embeddings of dimension 300 have been
used to obtain representations of words as word vectors.", "The resultant word embeddings of each word are concatenated to obtain the final utterance representation.", "While it is indeed possible to use more advanced textual encoding techniques (e.g., convolutional or recurrent neural networks), we decided to use the same pre-trained extractive strategy as in the case of the other modalities.", "Audio Features.", "To elicit features from the audio, openSMILE (Eyben et al., 2010), an open-source toolkit, has been used.", "The features obtained by openSMILE include maxima dispersion quotients (Kane and Gobl, 2013), glottal source parameters (Drugman et al., 2011), several low-level descriptors (LLD) such as voice intensity, voice quality (e.g., jitter and shimmer), MFCC, voiced/unvoiced segmented features (Drugman and Alwan, 2011), pitch and their statistics (e.g., root quadratic mean, mean, etc.), and 12 Mel-frequency coefficients.", "All the above features are then concatenated to form a d_q = 256 dimensional representation for each window.", "The final audio representation of each utterance (A) is obtained by concatenating the obtained d_q-dimensional vectors for every window, i.e., A ∈ R^{w×d_q}, where w represents the total number of window segments.", "2 The original datasets with their videos and transcripts are downloaded from https://github.com/SenticNet/MELD and https://sail.usc.edu/iemocap/iemocap_release.htm .", "Video Features.", "To elicit visual features for each of the f frames from the video of an utterance, we use a pooling layer of an ImageNet (Deng et al., 2009) pretrained ResNet-152 (He et al., 2016) image classification model.", "Initially, each of the frames is preprocessed, which includes resizing and normalizing.", "So, the visual representation of each utterance (F) is obtained by concatenating the obtained d_f = 4096 dimensional feature vector for every frame, i.e., F ∈ R^{f×d_f} (Castro et al., 2019; Illendula and Sheth, 2019; Poria et al., 2017b; Poria et al., 2017a).", "The
proposed network consists of three main components:", "(i) Modality Encoders (ME), which primarily take as input the uni-modal features (extracted above) and produce as outputs the individual modality encodings,", "(ii) the Triplet Attention Subnetwork (TAS), which encompasses self, inter-modal and inter-task attention, and", "(iii) a classification layer that contains the outputs of both tasks (DAC and ER).", "Textual Modality.", "The utterance representation (U) obtained from the extracted textual features (discussed above) is then passed through three different Bi-directional LSTMs (Bi-LSTMs) (Hochreiter and Schmidhuber, 1997) to sequentially encode these representations into hidden states and learn different semantic-dependency-based features pertaining to the different tasks, i.e., DAC and ER.", "One Bi-LSTM learns DAC features that are tuned in accordance with the emotion features.", "The second learns features for the ER task, regulated by the learning of DA features.", "The third Bi-LSTM learns private features for the task of DAC which are not influenced by the features learnt from emotion.", "For each of these word features, the corresponding forward and backward hidden states h_i^f and h_i^b from the forward LSTM_fd and the backward LSTM_bd are concatenated to obtain a combined hidden state h_i = [h_i^f; h_i^b], yielding H = [h_1, . . . , h_n],", "where H ∈ R^{n×2d}.", "Here d represents the number of hidden units in each LSTM and n is the sequence length.", "Thus, the three obtained hidden state matrices correspond to the three Bi-LSTMs, i.e., H_1, H_2, H_3.", "These representations are then passed through three fully connected layers, each of dimension, say, d_c, to learn attention of different variants.", "Audio and Video Modalities.", "The extracted audio and video features (A and F) are also passed through three fully connected layers, each of dimension, say, d_c, to learn attention of different variants.", "We use a concept similar to that of (Vaswani et al., 2017), where the authors proposed to compute attention as mapping a query and a set
of key-value pairs to an output.", "The output is estimated as a weighted sum of the values, where the weight assigned to each value is calculated by a compatibility function of the query with its corresponding key.", "So, the representations obtained from each of the modality encoders above, which are each passed through three fully-connected layers, are termed queries and keys of dimension d_k = d_c and values of dimension d_v = d_c.", "We now have five (Q, K, V) triplets: (Q_1, K_1, V_1), (Q_2, K_2, V_2), (Q_3, K_3, V_3), (Q_a, K_a, V_a), (Q_v, K_v, V_v), where the first three triplets are from the textual modality encoder (one each for DA-shared, DA-private and Emotion-shared) 3 , followed by one each from the audio and video encoders.", "These triplets are then used in different combinations to compute attention scores meant for specific purposes, which include self attention, inter-modal attention and inter-task attention.", "Self Attention.", "We compute self attention (SA) for all these triplets by computing the matrix multiplication of each triplet's queries with its corresponding keys.", "Inter-modal Attention.", "We compute inter-modal attention (IMA) amongst the triplets of all the modalities for the multi-task setup by computing the matrix multiplication of combinations of queries and keys of different modalities using Equation 4.", "In this manner, we obtain five IMA scores: IMA_v1 ∈ R^{f×n}, IMA_v3 ∈ R^{f×n}, IMA_a1 ∈ R^{w×n}, IMA_a3 ∈ R^{w×n} and IMA_va ∈ R^{f×w}.", "3 Subscripts 1, 2 and 3 represent the DA-shared, DA-private and Emotion-shared representations, respectively.", "Inter-task Attention.", "We compute inter-task attention (ITA) amongst the triplets of different tasks from the textual modality by computing the matrix multiplication of combinations of queries and keys of different tasks using Equation 4.", "In this manner, we obtain three ITA scores: ITA_12 ∈ R^{n×n}, ITA_21 ∈ R^{n×n} and ITA_31 ∈ R^{n×n}.", "This is done in order to learn
joint features of an utterance for the identification of DAs and emotions.", "Fusion of Attentions.", "We then take the softmax of all these computed attention scores to squash them into the range [0, 1], so that the ones having the maximum contribution get the highest probability values and vice-versa.", "We then compute the matrices of attention outputs for the different tasks and modalities from the different attention scores as: A = softmax(Q_i K_j^T) V_i (5), where A ∈ R^{n×d_c}.", "So, we obtain 13 different attention outputs from the corresponding attention scores: SA ∈ R^{n×d_c} for SA_1, SA_2 and SA_3; SA ∈ R^{w×d_c} for SA_a; SA ∈ R^{f×d_c} for SA_v; IMA_v1 ∈ R^{f×d_c}, IMA_v3 ∈ R^{f×d_c}, IMA_a1 ∈ R^{w×d_c}, IMA_a3 ∈ R^{w×d_c} and IMA_va ∈ R^{f×d_c}; and ITA_12 ∈ R^{n×d_c}, ITA_21 ∈ R^{n×d_c} and ITA_31 ∈ R^{n×d_c}.", "Next, we take the mean of the different attention outputs in varying combinations to finally obtain representations for each of the modalities and tasks: M_DA1, M_DA2, M_E, M_v and M_a.", "M_DA1 = mean(SA_1, IMA_va, ITA_12) (6), M_DA2 = mean(SA_2, ITA_21) (7), M_E = mean(SA_3, ITA_31) (8), M_v = mean(SA_v, IMA_v1, IMA_v3) (9), M_a = mean(SA_a, IMA_a1, IMA_a3) (10), where M ∈ R^{1×d_c}.", "Next, we focus on learning appropriate weights to combine these representations to obtain the final sentence representation for each of the tasks to be optimized jointly.", "where ⊙ represents the dot product of two vectors.", "Finally, we obtain the sentence representation (S) for each of the tasks as follows: S_DA = M_DA1 + W_1 ⊙ M_DA2 + W_2 ⊙ M_E (13) and S_E = M_E ⊕ M_v ⊕ M_a (14).", "4.2.3 Classification Layer.", "The outputs, i.e., the sentence representations for each of the tasks (S_DA and S_E) from the TAS, are connected to a fully-connected layer which in turn contains the output neurons for both tasks (DAC and ER).", "The errors computed from each of these channels are back-propagated jointly to the preceding layers of the model in order to learn the joint features of both the tasks
thereby allowing them to benefit from the TAS layer.", "As the main aim of this study is to learn DAs with the help of emotion, the performance of the DAC task also depends on the quality of the features learned for the ER task, with useful and better features assisting the collective learning process, and vice-versa.", "The EMOTyDA dataset was divided into two parts with an 80%-20% split for the train and test sets, respectively.", "The statistics of the train and test sets are shown in Table 2.", "For all the experiments conducted, the same train and test sets were employed to allow a fair comparison between all approaches.", "For encoding the textual modality, a Bi-LSTM layer with 200 memory cells was used, followed by a dropout rate of 0.1.", "Fully-connected layers of dimension 300 were used in all the subsequent layers.", "The first and the second channel contain 12 and 10 output neurons, respectively, for the DA and the emotion tags.", "The categorical cross-entropy loss function is used in both channels.", "A learning rate of 0.01 was found to be optimal.", "The Adam optimizer was used in the final experimental setting.", "All these values were selected after a thorough sensitivity analysis of the parameters.", "We conduct experiments segregating dyadic and multi-party conversations, in addition to the whole dataset, for the multi-task framework along with different modalities.", "Table 3: Results of all the baselines and the proposed models in terms of accuracy and F1-score (columns: EMOTyDA:dyadic, EMOTyDA:multiparty, EMOTyDA; each under the DA and DA + ER settings, reported as Acc./F1-score). Text (T): 63.75/60.67, 65.23/62.35; 46.20/39.23, 48.90/41.10; 53.56/49.17, 53.02/50.22. Audio (A): 32.06/24.95, 35.42/38.92; 25.76/19.45, 26.58/21.01; 27.13/23.09, 28.65/24.87. Video (V): 35.94/29.71, 36.88/30.34; 27.23/20.26, 28.12/21.03; 30.16/26.85, 32.09/27.73. T + A: 65.43/60.67, 66.98/62.08; 47.17/40.30, 49.42/41.69; 54.12/50.00, 56.62/51.99. A + V: 38.59/34.98, 40.07/36.00; 27.91/22.76, 28.95/23.89; 32.09/28.86, 33.76/29.13. T + V: 67.12/64.14, 70.55/68.12; 49.80/41.90, 51.00/44.52; 57.31/53.20, 60.88/57.96. T + A + V: 66.35/62.30, 69.45/67.00; 49.02/41.00, 50.65/44.00; 56.77/52.09, 59.86/56.05. T + V (emotional cue): 65.26/60.20, -; 46.88/39.70, -; 54.31/50.02, -.", "All the reported results are statistically significant.", "Additionally, we also provide results of the multi-task framework with varying combinations of the different attentions applied, to provide an analysis of the effectiveness of each attention on the entire EMOTyDA dataset.", "Along with this, we also include the results of some simple baselines, such as feature-level, hidden-state-level and hypothesis-level concatenation.", "It is to be noted that the purpose of the current work is to examine the effect of emotion while deciding the DA of an utterance from multiple modalities.", "We, therefore, do not focus on enhancements or analysis of the ER task and view it as an auxiliary task aiding the primary task, i.e., DAC.", "Accordingly, the results and findings are reported with respect to only the DAC task and its different combinations.", "Table 3 shows the results of all the various models.", "As can be seen, the textual modality provides the best results amongst the uni-modal variants.", "The addition of audio and visual features individually improves this uni-modal baseline.", "The combination of visual and textual features achieves the best score across all the combinations of the dataset.", "The tri-modal variant is not able to attain the best score, supposedly because of suboptimal", "Figure 5: The visualization of the attention scores for 5 sample utterances for the tri-modal variant.", "Though it still improves the performance compared to all the unimodal baselines.", "Figure 5 shows the heatmap visualization of the tri-modal variant to highlight the contributions of the different modalities.", "As is also evident from the results, the multi-task
variant performs consistently well throughout all the experiments compared to its single-task DAC variant.", "As a baseline, we also show that using emotion as a feature in the single-task DAC counterpart does not outperform the proposed multi-task variant.", "This shows that the joint optimization of both these tasks boosts the performance of DAC.", "Table 4 shows the results of a few simple baselines, along with the ablation study of the different attentions used in the proposed framework, to highlight the importance and effectiveness of each of the attentions for the whole EMOTyDA dataset.", "As seen from the table, the combination of all three attention mechanisms, i.e., SA, IMA and ITA, yields the best results, thus stressing the role of incorporating across-task and across-modal relationships.", "Figure 6 shows the visualization of the learned weights of different words for a sample utterance for the single-task DAC as well as the multi-task model, to highlight the importance of incorporating ER as an auxiliary task.", "Utterance: She is not Larry's girl -- True Label: dag, MT (T+V): dag, ST (T+V): s. Utterance: I know, it was amazing!", "The true DA label of the utterance in Figure 6 is disagreement , with the emotion being anger .", "With the multi-task approach, the attention is laid on appropriate disagreement-bearing words, whereas with the single-task approach, attention is learnt on agreement words such as yes , which here has just been used in a sarcastic way to disagree.", "It is also observed that the experiments with dyadic conversations attain better results as compared to multi-party conversations.", "This is supposedly due to the constant change of speakers in multi-party conversations, which misleads the classifier into learning suboptimal features, thus stressing the role of speaker information as a valuable cue for DAC.", "Error Analysis.", "Plausible reasons behind the faults in the DA prediction are as follows:", "(i) Skewed dataset: The occurrence of most of the tags in the proposed
dataset is very low, i.e., the dataset is skewed, as shown in Figure 1a.", "This is consistent with real-time task-independent conversations, where some tags occur less frequently than others;", "(ii) Composite and longer utterances: Most of the utterances in the dataset are longer in length and are also composite in nature, encompassing diverse intentions in a single utterance.", "In such cases, it becomes difficult to learn features for discrete DAs;", "(iii) Mis-classification of emotion labels: Mis-classification of the DAs can be attributed to the mis-classification of the emotions for those particular utterances.", "Some examples of the same are shown in Table 5.", "In this paper, we investigate the role of emotion and multi-modality in determining the DAs of an utterance.", "To enable research into these aspects, we create a novel dataset, EMOTyDA, that contains emotion-rich videos of dialogues collected from various open-source datasets and manually annotated with DAs.", "Consequently, we also propose an attention-based (self, inter-modal, inter-task) multi-modal, multi-task framework for the joint optimization of DAs and emotions.", "Results show that multi-modality and multi-tasking boost the performance of DA identification compared to its unimodal and single-task DAC variants.", "In the future, conversation history, speaker information and fine-grained modality encodings can be incorporated to predict DAs with more accuracy and precision.", "Dr. Sriparna Saha gratefully acknowledges the Young Faculty Research Fellowship (YFRF) Award, supported by the Visvesvaraya PhD Scheme for Electronics and IT, Ministry of Electronics and Information Technology (MeitY), Government of India, and implemented by Digital India Corporation (formerly Media Lab Asia), for carrying out this research." ]
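The triplet attention and fusion scheme described above (Eqs. 5-10) can be sketched in NumPy as follows. This is a minimal illustration, not the authors' implementation: the sizes, random inputs, and the mean-pooling used to reduce each attention output to a single d_c-dimensional vector are assumptions, since the text leaves the exact reduction to M ∈ R^{1×d_c} implicit.

```python
import numpy as np

def attention_output(Q, K, V):
    """A = softmax(Q K^T) V, as in Eq. (5): scores are squashed
    row-wise into [0, 1] before weighting the values."""
    scores = Q @ K.T
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Illustrative sizes (assumptions): n text tokens, w audio windows,
# f video frames, shared projection dimension d_c.
n, w, f, d_c = 6, 4, 5, 8
rng = np.random.default_rng(0)
triplet = lambda length: [rng.standard_normal((length, d_c)) for _ in range(3)]

Q1, K1, V1 = triplet(n)  # DA-shared text view
Q2, K2, V2 = triplet(n)  # DA-private text view
Qa, Ka, Va = triplet(w)  # audio
Qv, Kv, Vv = triplet(f)  # video

SA1 = attention_output(Q1, K1, V1)     # self attention, (n, d_c)
ITA12 = attention_output(Q1, K2, V2)   # inter-task attention, (n, d_c)
IMA_va = attention_output(Qv, Ka, Va)  # inter-modal attention, (f, d_c)

def fuse(*outputs):
    """Mean over each output's sequence axis, then across outputs,
    giving one d_c-dimensional representation (M in Eqs. 6-10)."""
    return np.mean([o.mean(axis=0) for o in outputs], axis=0)

M_DA1 = fuse(SA1, IMA_va, ITA12)  # Eq. (6)
```

Because each softmax row sums to one, every position's output is a convex combination of the value vectors; the remaining representations in Eqs. (7)-(10) follow the same pattern with different triplet combinations.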
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "objective", "objective", "objective", "method", "result", "result", "objective", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", 
"abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "abstain", "abstain", "other" ]
[ "Automatic extraction of product attribute values is an important enabling technology in e-Commerce platforms.", "This task is usually modeled using sequence labeling architectures, with several extensions to handle multi-attribute extraction.", "One line of previous work constructs attribute-specific models, through separate decoders or entirely separate models.", "However, this approach constrains knowledge sharing across different attributes.", "Other contributions use a single multi-attribute model, with different techniques to embed attribute information.", "But sharing the entire network parameters across all attributes can limit the model's capacity to capture attribute-specific characteristics.", "In this paper we present AdaTag, which uses adaptive decoding to handle extraction.", "We parameterize the decoder with pretrained attribute embeddings, through a hypernetwork and a Mixture-of-Experts (MoE) module.", "This allows for separate, but semantically correlated, decoders to be generated on the fly for different attributes.", "This approach facilitates knowledge sharing, while maintaining the specificity of each attribute.", "Our experiments on a real-world e-Commerce dataset show marked improvements over previous methods.", "The product profiles on e-Commerce platforms usually comprise natural text describing products and their main features.", "Key product features are conveyed in unstructured texts, with limited impact on machine-actionable applications, like search (Ai et al., 2017), recommendation (Kula, 2015), and question answering (Kulkarni et al., 2019), among others.", "Automatic attribute value extraction aims to obtain structured product features from product profiles.", "Most of the work was done during an internship at Amazon.", "The input is a textual sequence from the product profile, along with the required attribute to be extracted, out of a potentially large number of attributes.", "The output is the corresponding extracted
attribute values.", "Figure 1 shows the profile of a moisturizing cream product as an example, which consists of a title, several information bullets, and a product description.", "It also shows the attribute values that could be extracted.", "Most existing studies on attribute value extraction use neural sequence labeling architectures (Zheng et al., 2018; Karamanolakis et al., 2020; Xu et al., 2019).", "To handle multiple attributes, one line of previous contributions develops a set of attribute-specific models (i.e., one model per attribute).", "The goal is to construct neural networks with (partially) separate model parameters for different attributes.", "For example, one can construct an independent sequence labeling model for each attribute and make predictions with all the models collectively (e.g., the vanilla OpenTag model (Zheng et al., 2018)).", "Instead of totally separate models, one can also use different tag sets corresponding to different attributes.", "These networks can also share the feature encoder and use separate label decoders (Yang et al., 2017).", "However, the explicit network (component) separation in these contributions constrains knowledge sharing across different attributes.", "Exposure to other attributes can help in disambiguating the values for each attribute.", "And having access to the entire training data for all attributes helps with the generic sequence tagging task.", "Another line of multi-attribute extraction contributions learns a single model for all attributes.", "The model proposed by Xu et al.
(2019), for example, embeds the attribute name with the textual sequence, to achieve a single attribute-aware extraction model for all attributes.", "This approach addresses the issues in the previous direction.", "However, sharing all the network parameters with all attributes could limit the model's capacity to capture attribute-specific characteristics.", "In this paper we address the limitations of the existing contribution lines, through adaptive decoder parameterization .", "We propose to generate a decoder on the fly for each attribute based on its embedding.", "This results in different but semantically correlated decoders, which maintain the specific characteristics for each attribute, while facilitating knowledge-sharing across different attributes.", "To this end, we use conditional random fields (CRF) (Lafferty et al., 2001) as the decoders, and parameterize the decoding layers with the attribute embedding through a hypernetwork (Ha et al., 2017) and a Mixture-of-Experts (MoE) module (Jacobs et al., 1991).", "We further explore several pretrained attribute embedding techniques, to add useful attribute-specific external signals.", "We use both contextualized and static embeddings for the attribute name along with its potential values to capture meaningful semantic representations.", "We summarize our contributions as follows: (1) We propose a multi-attribute value extraction model with an adaptive CRF-based decoder.", "Our model allows for knowledge sharing across different attributes, yet maintains the individual characteristics of each attribute.", "(2) We propose several attribute embedding methods, that provide important external semantic signals to the model.", "(3) We conduct extensive experiments on a real-world e-Commerce dataset, and show improvements over previous methods.", "We also draw insights on the behavior of the model and the attribute value extraction task itself.", "The main goal of the task is to extract the corresponding values for 
a given attribute, out of a number of attributes of interest, from the text sequence of a product profile.", "Formally, given a text sequence X = [x_1, . . . , x_n] in a product profile, where n is the number of words, and a query attribute r ∈ R, where R is a predefined set of attributes, the model is expected to extract all text spans from X that could be valid values for attribute r characterizing this product.", "When there are no corresponding values mentioned in X, the model should return an empty set.", "For example, for the product in Figure 1, given its title as X, the model is expected to return (Dry, Sensitive) if r = SkinType, and an empty set if r = Color.", "Following standard approaches (Zheng et al., 2018; Xu et al., 2019; Karamanolakis et al., 2020), under the assumption that different values for an attribute do not overlap in the text sequence, we formulate the value extraction task as a sequence tagging task with the BIOE tagging scheme.", "That is, given X and r, we want to predict a tag sequence Y = [y_1, . . .
, y_n], where y_i ∈ {B, I, O, E} is the tag for x_i.", "B/E indicates the corresponding word is the beginning/ending of an attribute value, I means the word is inside an attribute value, and O means the word is outside any attribute value.", "Table 1 shows an example of the tag sequence for attribute Scent of a shower gel collection, where orchid, cherry pie, mango ice cream could be extracted as the values.", "The BiLSTM-CRF architecture (Huang et al., 2015) consists of a BiLSTM-based text encoder and a CRF-based decoder.", "This architecture has been proven to be effective for the attribute value extraction task (Zheng et al., 2018; Xu et al., 2019; Karamanolakis et al., 2020).", "We build our AdaTag model based on the BiLSTM-CRF architecture, as we find that BiLSTM-CRF-based models generally perform better than their BiLSTM-based, BERT-based (Devlin et al., 2019) and BERT-CRF-based counterparts, as shown in Section 5.", "We introduce the general attribute-agnostic BiLSTM-CRF architecture, which our model is based on, in this subsection.", "Given a text sequence X = [x_1, . . . , x_n], we obtain the sequence of word embeddings using an embedding matrix W_word.", "We get the hidden representation of each word by feeding the embeddings into a bi-directional Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) layer with hidden size d_h: [h_1, . . . , h_n] = BiLSTM([x_1, . . . , x_n]).", "We use a CRF-based decoder to decode the sequence of hidden representations while capturing the dependency among tags (e.g., I can only be followed by I or E).", "It consists of a linear layer and a transition matrix, which are used to calculate the emission score and the transition score for the tag prediction, respectively.", "Let V = [B, I, O, E] be the vocabulary of all possible tags.", "We calculate an emission score matrix P = [p_1, . . .
, p n ] R 4 n , where P ij is the score for assigning the i -th tag in V to x j .", "This is computed by feeding [ h 1 , . . . , h n ] into a linear layer with parameters [ W , b ] , specifically p i = Wh i + b R 4 , where W R 4 d h and b R 4 .", "For a BIOE tag sequence Y = [ y 1 , . . . , y n ] , we get its index sequence Z = [ z 1 , . . . , z n ] where z i { 1 , 2 , 3 , 4 } is the index of y i in V .", "The score for an input text sequence X to be assigned with a tag sequence Y is calculated as: s ( X, Y ) = s ( X, Z ) = n 1 (cid:88) i =1 T z i z i +1 + n (cid:88) i =1 P z i i , (2) where T R 4 4 is the transition matrix of CRF, such that T ij is the score of a transition from the i -th tag to the j -th tag in V .", "The multi-attribute value extraction task can be thought of as a group of extraction subtasks, corresponding to different attributes.", "While all attributes share the general knowledge about value extraction, each has its specificity.", "The key idea in our proposed model is to dynamically adapt the parameters of the extraction model based on the specific subtask corresponding to the given attribute.", "We use a BiLSTM-CRF (Huang et al., 2015) architecture, where different subtasks, corresponding to different attributes, share the same text encoder to derive a contextualized hidden representation for each word.", "Then the hidden representations of the text sequence are decoded into a sequence of tags with a CRF-based decoder, the parameters of which are generated on the fly based on the attribute embedding.", "In this setup, different subtasks are trained jointly, and different decoders are correlated based on the attribute embedding.", "This facilitates a knowledge-sharing scheme across different attributes.", "Intuitively, this can help with learning generic abilities like detecting value boundary, which is at the core of the extraction process of any attribute.", "At the same time, our model provides each subtask with a customized decoder 
parameterization, which improves the model's capacity for capturing attribute-specific knowledge.", "Figure 2 presents our overall model architecture, where we equip the BiLSTM-CRF architecture with an adaptive CRF-based decoder.", "In Section 3.2, we introduce our adaptive CRF-based decoder, which is parameterized with the attribute embedding.", "In Section 3.3, we describe how to obtain pretrained attribute embeddings that can capture the characteristics of different subtasks, so that similar attributes get similar decoding layers.", "In attribute value extraction, the model takes the text sequence X with a query attribute r as input, and is expected to predict Y based on both X and r.", "To make the model aware of the query attribute, we need to incorporate the attribute information into some components of the BiLSTM-CRF architecture.", "The BiLSTM-based text encoder is responsible for encoding the text sequence and obtaining a contextualized representation for each word, which can be regarded as understanding the sentence.", "The CRF-based decoder then predicts a tag for each word based on its representation.", "Therefore, we propose that all attributes share a unified text encoder, so that the representation can be enhanced through learning with different subtasks, and that each attribute has a decoder adapted to its corresponding subtask, the parameters of which are generated based on the attribute information.", "As introduced in Section 2.2, a CRF-based decoder consists of a linear layer and a transition matrix.", "The linear layer takes hidden representations as input and predicts a tag distribution for each word independently.", "[Figure 2: Model architecture.]", "It captures most of the characteristics of value extraction for a given attribute based on the text understanding.", "More flexibility is needed to model the specificity of different attributes.", "By contrast, the transition matrix learns the dependency among tags to avoid predicting unlikely tag sequences.", "It only
captures shallow characteristics of the attribute based on its value statistics.", "For example, the transition scores from B to other tags largely depend on the frequent lengths of the attribute values.", "If single-word values are mentioned more often, then B is more likely to be followed by O.", "If two-word values dominate the vocabulary, then B is more likely to be followed by E.", "Attributes could be simply clustered based on these shallow characteristics.", "In this work, we parameterize the CRF-based decoder with the attribute embedding r ∈ R^{d_r}, where d_r is the dimension of the attribute embedding.", "For the linear layer, we adopt a hypernetwork (Ha et al., 2017) due to its high flexibility.", "For the transition matrix, we develop a Mixture-of-Experts (Pahuja et al., 2019) module to leverage the latent clustering nature of attributes.", "We nevertheless experiment with all 4 combinations of these methods in Section 5.3, and this choice performs best.", "Hypernetwork.", "The idea of hypernetworks (Ha et al., 2017) is to use one network to generate the parameters of another network.", "Such an approach has high flexibility when no constraint is imposed during generation.", "We therefore use it to parameterize the linear layer.", "In our model, we learn two different linear transformations that map the attribute embedding to the parameters of the linear layer (W ∈ R^{4×d_h}, b ∈ R^4) in the CRF-based decoder: W = Reshape(W^w_hyper r + b^w_hyper), b = Reshape(W^b_hyper r + b^b_hyper).", "Here W^w_hyper ∈ R^{4d_h × d_r}, b^w_hyper ∈ R^{4d_h}, W^b_hyper ∈ R^{4 × d_r}, b^b_hyper ∈ R^4, and the Reshape operator reshapes a 1-D vector into a matrix with the same number of elements.", "Mixture-of-Experts.", "The idea of Mixture-of-Experts (Jacobs et al., 1991) is to have a group of networks (experts) that jointly make decisions with dynamically determined weights.", "Unlike previous approaches that combine each expert's prediction, we combine their parameters for generating the
transition matrix.", "Let k be the number of experts we use to parameterize the transition matrix T ∈ R^{4×4}, where k is a hyperparameter.", "We introduce k learnable matrices T^{(1)}, ..., T^{(k)} for the k experts.", "Each expert's matrix can be understood as a cluster prototype, and we employ a linear gating network to compute the probability of assigning the given attribute to each expert: α = Softmax(W_moe r + b_moe).", "Here W_moe ∈ R^{k × d_r}, b_moe ∈ R^k, α = [α_1, ..., α_k] ∈ R^k, and Σ_{i=1}^{k} α_i = 1.", "The parameters of the transition matrix for this attribute are calculated as: T = Σ_{i=1}^{k} α_i T^{(i)}.", "The attribute embedding r plays a key role in deriving the attribute-specific decoding layers.", "Therefore, the quality of the attribute embeddings is crucial to the success of our parameterization method.", "Good attribute embeddings are supposed to capture the subtask similarities such that similar extraction tasks use decoders with similar parameters.", "In this work, we propose to use the attribute name and possible values as a proxy to capture the characteristics of the value extraction task for a given attribute.", "The attribute embeddings can therefore be directly derived from the training data and loaded into the attribute embedding layer as initialization.", "For each attribute r, we first collect all the sentences from the training data that are annotated with at least one value for r.", "We denote the collected sentences with values as D_r = {(r', v_i, X_i)}_{i=1}^{n_r}, where r' is the phrase representation of r (e.g., r' = Skin Type if r = SkinType), v_i is a span in text sequence X_i that serves as the value for r, and n_r is the number of collected sentences.", "For each (r', v_i, X_i), we can calculate an attribute name embedding r^name_i and an attribute value embedding r^value_i in either a contextualized or an uncontextualized way, which are detailed later.", "We pool over all instances in D_r to get the final attribute
name embedding and attribute value embedding, which are concatenated as the attribute embedding: r^name = (1/n_r) Σ_{i=1}^{n_r} r^name_i, r^value = (1/n_r) Σ_{i=1}^{n_r} r^value_i, r = Concat(r^name, r^value).", "Contextualized Embeddings.", "Taking the context into consideration helps get embeddings that more accurately represent the semantics of the word.", "Here we use the contextualized representations provided by BERT (Devlin et al., 2019) to generate the embedding.", "We use BERT to encode X_i and take v_i's phrase embedding (the averaged embedding of the words in the phrase) as r^value_i.", "By replacing v_i with [BOA] r' [EOA] and encoding the modified sequence with BERT, we get the phrase embedding of [BOA] r' [EOA] as r^name_i.", "Uncontextualized Embeddings.", "Static embeddings like Word2Vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) can be more stable under noisy contexts.", "We use GloVe (50d) to get the phrase embedding of v_i as r^value_i and the phrase embedding of r' as r^name_i.", "As we parameterize the CRF-based decoder with the attribute embedding through MoE and a hypernetwork, the learnable parameters in our model include θ_encoder = {W_word, θ_BiLSTM}, θ_hyper = {W^w_hyper, b^w_hyper, W^b_hyper, b^b_hyper}, and θ_moe = {W_moe, b_moe, {T^{(i)}}_{i=1}^{k}}.", "We freeze the attribute embeddings W_att as this gives better performance, which is also discussed in Section 5.3.", "The model is trained by maximizing the log-likelihood of the gold tag sequence, log p(Y | X, r) = s(X, Y) − log Σ_{Y' ∈ V^n} exp(s(X, Y')), where V^n is the set of all tag sequences of length n.", "The log-likelihood can be computed efficiently using the forward algorithm (Baum and Eagon, 1967) for hidden Markov models (HMMs).", "At inference, we adopt the Viterbi algorithm (Viterbi, 1967) to get the most likely Y given X and r on the test set.", "To evaluate the effectiveness of our proposed model, we build a dataset by collecting product profiles (title, bullets, and description) from the public web pages at Amazon.com. 2", "Following previous works (Zheng et al., 2018;
Karamanolakis et al., 2020; Xu et al., 2019), we obtain the attribute-value pairs for each product using the product information on the webpages by distant supervision.", "We select 32 attributes with different frequencies.", "For each attribute, we collect product profiles that are labeled with at least one value for this attribute.", "We further split the collected data into training (90%) and development (10%) sets.", "The annotations obtained by distant supervision are often noisy, so they cannot be considered gold labels.", "To ensure the reliability of the evaluation results, we also manually annotated an additional test set covering several attributes.", "We randomly selected 12 attributes from the 32 training attributes, took a random sample from the relevant product profiles for each attribute, and asked human annotators to annotate the corresponding values.", "We ensured that there is no product overlap between the training/development sets and the test set.", "Putting together the datasets built for each individual attribute, we end up with training and development sets for 32 attributes, covering 333,857 and 40,008 products respectively.", "The test set has 12 attributes and covers 11,818 products.", "Table 2 presents the statistics of our collected dataset.", "Table 3 shows the attribute distribution of the training set.", "2 While Xu et al. (2019) released a subset of their collected data from AliExpress.com, their data has a long-tailed attribute distribution (7,650 of 8,906 attributes occur fewer than 10 times). This brings major challenges for zero-/few-shot learning, which are beyond our scope.", "It clearly demonstrates the data imbalance issue of real-world attribute value extraction data.", "Most of the attribute values are usually covered in the title and bullets, since sellers aim to highlight the product features early on in the product profile.", "The description, on the other hand, provides only a few new values complementing those mentioned in the title and bullets, but significantly increases the computational costs due to its length.", "Therefore, we consider two settings for experiments: extracting from the title only ( Title ) and extracting from the concatenation of the title and bullets ( Title + Bullets ).", "For each attribute, we calculate Precision/Recall/F1 based on exact string matching. That is, an extracted value is considered correct only if it exactly matches one of the ground-truth values for the query attribute in the given text sequence. We use Macro-Precision/Macro-Recall/Macro-F1 (denoted as P/R/F1) as the aggregated metrics to avoid bias towards high-resource attributes. They are calculated by averaging per-attribute metrics.", "We compare with the following strong baselines for attribute value extraction. BiLSTM uses a BiLSTM-based encoder. Each hidden representation is decoded independently into a tag with a linear layer followed by softmax. BiLSTM-CRF (Huang et al., 2015) uses a BiLSTM-based encoder and a CRF-based decoder, as described in Section 2.2. Zheng et al. (2018) propose OpenTag, which uses a self-attention layer between the BiLSTM-based encoder and CRF-based decoder for interpretable attribute value extraction.", "3 We discuss the sizes of different models in Appendix A.", "However, we find the self-attention layer not helpful for the performance.
4 We therefore only present the results for BiLSTM-CRF in Section 5. BERT (Devlin et al., 2019) and BERT-CRF replace the BiLSTM-based text encoder with BERT. 5", "Note that these four methods do not take the query attribute as input. To make them work in our more realistic setting with multiple ( N ) attributes, we consider two variants for each of them. (1) N tag sets: We introduce one set of B/I/E tags for each attribute, so that a tag sequence can be unambiguously mapped to the extraction results for multiple attributes. For example, the tag sequence B-SkinType E-SkinType O B-Scent indicates that the first two words constitute a value for attribute SkinType, and the last word is a value for Scent. Only one model is needed to handle the extraction for all attributes. (2) N models: We build one value extraction model for each attribute, i.e., we train N models for this task.", "The N models variant isolates the learning of different attributes. To enable knowledge sharing, other methods share model components or the whole model among all attributes: BiLSTM-CRF-SharedEmb shares a word embedding layer among all attributes. Each attribute has its own BiLSTM layer and CRF-based decoder, which are independent of each other. BiLSTM-MultiCRF (Yang et al., 2017) shares a BiLSTM-based text encoder among all attributes. Each attribute has its own CRF-based decoder. SUOpenTag (Xu et al., 2019) encodes both the text sequence and the query attribute with BERT and adopts a cross-attention mechanism to get an attribute-aware representation for each word. The hidden representations are decoded into tags with a CRF-based decoder.", "We also include AdaTag (Random AttEmb), which has the same architecture as our model but uses randomly initialized learnable attribute embeddings of the same dimension.", "We implement all models with PyTorch (Paszke et al., 2019). For models involving BERT, we use the bert-base-cased version.
Other models use pretrained 50d GloVe (Pennington et al., 2014) embeddings as the initialization of the word embedding matrix W_word.", "4 We hypothesize that the improvement brought by the self-attention module is dataset-specific.", "5 The hidden representation for each word is the average of its subword representations.", "We choose d_h = 200 as the hidden size of the BiLSTM layer and 32 as the batch size. BERT-based models are optimized with the AdamW (Loshchilov and Hutter, 2019) optimizer with learning rate 2e-5. Others use the Adam (Kingma and Ba, 2015) optimizer with learning rate 1e-3. We perform early stopping if no improvement in (Macro-)F1 is observed on the development set for 3 epochs. For our model, we use contextualized attribute embeddings as described in Section 3.2 and freeze them during training. We set k = 3 for MoE. We made these choices based on development set performance.", "Table 4 presents the overall results on our dataset under both the Title and Title + Bullets settings. Our model demonstrates clear improvements over the baselines on all metrics, except for obtaining the second-best recall under the Title + Bullets setting. These comparisons demonstrate the overall effectiveness of our model and the pretrained attribute embeddings.", "The N tag sets variants get much lower performance than other methods, probably due to the severe data imbalance issue in the training set (see Table 3). All attributes share the same CRF-based decoder, which could make learning biased towards high-resource attributes. Note that introducing one set of tags for each entity type is the standard approach for the Named Entity Recognition (NER) task.
Its low performance suggests that the attribute value extraction task is inherently different from NER.", "Variants with shared components generally achieve higher performance than the independent modeling method ( N models), which demonstrates the usefulness of enabling knowledge sharing among different subtasks.", "We also notice that BERT and BERT-CRF models get lower performance than their BiLSTM and BiLSTM-CRF counterparts. The reason could be the domain discrepancy between the corpora that BERT is pretrained on and the product titles/bullets. The former consist mainly of natural language sentences, while the latter are a mixture of keywords and ungrammatical sentences.", "To better understand the gain achieved by joint modeling, we further split the 12 testing attributes into 8 high-resource attributes and 4 low-resource attributes, based on the size of the training data, with 1,000 instances as the threshold. It is important to point out that many factors (e.g., vocabulary size, value ambiguity, and domain diversity), other than the size of training data, can contribute to the difficulty of modeling an attribute. Therefore, the performance for different attributes is not directly comparable. 6", "From the results in Table 5, we can see that our model achieves a much more significant improvement over the independent modeling approach (BiLSTM-CRF ( N models)) on low-resource attributes compared to high-resource attributes. This suggests that low-resource attributes benefit more from knowledge sharing, making our model desirable in the real-world setting with an imbalanced attribute distribution.", "Attribute Embeddings. We study different choices of adopting pretrained attribute embeddings.", "6 Some low-resource attributes (e.g., BatteryCellComposition) have a small value vocabulary and simple mentioning patterns. Saturated performance on them pulls up the metrics.", "Specifically, we experiment with contextualized embeddings (BERT name+value) and uncontextualized embeddings (GloVe name+value) under the Title setting. For given attribute embeddings, we can either finetune them during training or freeze them once loaded. We also experiment with attribute name embeddings r^name only and attribute value embeddings r^value only, to understand which information is more helpful. The baseline is set as using randomly initialized learnable attribute embeddings. Table 6 shows the results. Comparing attribute embeddings with the same dimension, we find that freezing pretrained embeddings always leads to a performance gain over the random baseline. This is because our parameterization methods have high flexibility in generating the parameters for the decoder. Using pretrained embeddings and freezing them provides the model with a good starting point and makes learning easier by reducing the degrees of freedom. BERT name (freeze) outperforms BERT value (freeze), suggesting that the attribute name is more informative in determining the characteristics of the value extraction task on our dataset, where the values labeled through distant supervision are noisy.", "We compare different design choices for parameterizing the CRF-based decoder.
For designs involving MoE, we search the number of experts ( k ) over {1, 2, 3, 4, 5} and adopt", "the best one to present the results.", "We experiment under the Title setting.", "From Table 7, we find that parameterizing the linear layer with MoE leads to much lower performance.", "This is reasonable because the linear layer plays a much more important role in the decoder, while the transition matrix acts more like a regularizer to avoid bad tag sequences.", "MoE uses k matrices as a basis and expects to represent the parameters for any attribute as a linear combination of the bases.", "That limits the expressiveness to capture complicated characteristics of different attributes and thus severely hurts the performance.", "As for the transition matrix, modeling with MoE is a better choice.", "This is because the transition matrix is more structured, in the sense that each of its elements is expected to be either a big number or a small number based on its semantics.", "For example, the transition score for I → E should be much higher than that for I → B.
A hypernetwork is too flexible to generate such structured parameters.", "An important motivation of our model is that joint modeling of different attributes can facilitate knowledge sharing and improve the performance.", "Here we study how the model's performance improves as the number of jointly modeled attributes increases.", "We experiment under the Title setting.", "We start by training our model on the 12 attributes that have test data.", "After that, we randomly select 5, 10, 15, 20 attributes from the remaining attributes, and add them to the joint training.", "The evaluation results on the 12 test attributes are presented in Figure 3.", "While our model generally demonstrates greater improvement with joint modeling of more attributes, the other models' performance fluctuates or goes down.", "This also demonstrates the scalability of our model when new attributes keep emerging in real-world scenarios.", "Attribute Value Extraction.", "OpenTag (Zheng et al., 2018) formulates attribute value extraction as a sequence tagging task, and proposes a BiLSTM-SelfAttention-CRF architecture to address the problem.", "Xu et al. (2019) propose an attribute-aware setup, by utilizing one set of BIO tags and an attribute name embedding with an attention mechanism, to enforce the extraction network to be attribute comprehensive.", "Karamanolakis et al. (2020) additionally incorporate the product taxonomy into a multitask learning setup, to capture the nuances across different product types.", "Zhu et al. (2020) introduce a multi-modal network to combine text and visual information with cross-modality attention, to leverage rich image information that is not conveyed in text.", "Wang et al.
(2020) use a question answering formulation to tackle attribute value extraction.", "Like most previous contributions, we adopt a sequence labeling architecture as the extraction setup in our model.", "But we utilize an adaptive decoding approach, where the decoding network is parameterized with the attribute embedding.", "Dynamic Parameter Generation.", "Our model adopts an adaptive decoding setup, parameterized with attribute embeddings through a Mixture-of-Experts module and a hypernetwork.", "Jacobs et al. (1991) first propose a system composed of several different expert networks and use a gating network that decides how to assign different training instances to different experts.", "Alshaikh et al. (2020); Guo et al. (2018); Le et al. (2016); Peng et al. (2019) all use domain/knowledge experts, and combine the predictions of the experts with a gating network.", "Unlike these works, we combine the weights of the experts to parameterize a network layer given an input embedding.", "Ha et al. (2017) propose the general idea of generating the parameters of one network with another network.", "The model proposed in Cai et al. (2019) generates the parameters of an encoder-decoder architecture by referring to the context-aware and topic-aware input.", "Suarez (2017) uses a hypernetwork to scale the weights of the main recurrent network.", "Platanios et al.
(2018) tackle neural machine translation between multiple languages using a universal model with a contextual parameter generator.", "In this work, we propose a multi-attribute value extraction model that performs joint modeling of many attributes using an adaptive CRF-based decoder.", "Our model has a high capacity to derive attribute-specific network parameters while facilitating knowledge sharing.", "Incorporated with pretrained attribute embeddings, our model shows marked improvements over previous methods.", "This work has been supported in part by NSF SMA 18-29268.", "We would like to thank Jun Ma, Chenwei Zhang, Colin Lockard, Pascual Martínez-Gómez, Binxuan Huang from Amazon, and all the collaborators in the USC INK research lab, for their constructive feedback on the work.", "We would also like to thank the anonymous reviewers for their valuable comments." ]
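The CRF scoring in Eq. (2) of the excerpt above can be sketched in a few lines of Python. This is only an illustration: the emission and transition scores below are invented toy numbers, and the exhaustive search over all tag sequences stands in for the Viterbi algorithm mentioned in the text.

```python
import itertools

TAGS = ["B", "I", "O", "E"]  # the tag vocabulary V

def score(P, T, z):
    """Eq. (2): s(X, Z) = sum of transition scores plus sum of emission scores.

    P[i][j] is the emission score of the i-th tag at position j,
    T[i][j] is the transition score from the i-th tag to the j-th tag,
    and z is a 0-based tag-index sequence of length n.
    """
    transition = sum(T[z[i]][z[i + 1]] for i in range(len(z) - 1))
    emission = sum(P[z[i]][i] for i in range(len(z)))
    return transition + emission

def best_sequence(P, T, n):
    """Exhaustive search over all 4^n tag sequences; Viterbi computes this efficiently."""
    return max(itertools.product(range(len(TAGS)), repeat=n),
               key=lambda z: score(P, T, z))

# Toy scores for a 3-word sequence whose emissions favour the tag path B, I, E.
P = [[2, 0, 0],   # B
     [0, 3, 0],   # I
     [0, 0, 0],   # O
     [0, 0, 1]]   # E
T = [[0] * 4 for _ in range(4)]  # neutral transitions, for simplicity

z = best_sequence(P, T, 3)
print([TAGS[i] for i in z], score(P, T, z))  # ['B', 'I', 'E'] 6
```

With neutral transitions the best path simply follows the per-position emission maxima; a learned transition matrix would additionally penalize invalid moves such as O followed by E.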
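The adaptive decoder described above (a hypernetwork generating the emission layer, and a mixture-of-experts gate combining k expert transition matrices) can be sketched in pure Python. All dimensions, the random initialization, and the helper names here are illustrative assumptions, not the paper's actual training setup.

```python
import math
import random

random.seed(0)
NUM_TAGS, D_H, D_R, K = 4, 6, 4, 3  # tags (B/I/O/E), hidden size d_h, attr-emb dim d_r, experts k

def rand_mat(rows, cols):
    return [[random.uniform(-1, 1) for _ in range(cols)] for _ in range(rows)]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def softmax(xs):
    mx = max(xs)
    es = [math.exp(x - mx) for x in xs]
    s = sum(es)
    return [e / s for e in es]

# Hypernetwork: two affine maps from the attribute embedding r to (W, b).
W_w_hyper, b_w_hyper = rand_mat(NUM_TAGS * D_H, D_R), [0.0] * (NUM_TAGS * D_H)
W_b_hyper, b_b_hyper = rand_mat(NUM_TAGS, D_R), [0.0] * NUM_TAGS
# Mixture-of-Experts: k expert transition matrices plus a linear gating network.
experts = [rand_mat(NUM_TAGS, NUM_TAGS) for _ in range(K)]
W_moe, b_moe = rand_mat(K, D_R), [0.0] * K

def generate_decoder(r):
    """Generate attribute-specific CRF decoder parameters from attribute embedding r."""
    flat = [x + b for x, b in zip(matvec(W_w_hyper, r), b_w_hyper)]
    W = [flat[i * D_H:(i + 1) * D_H] for i in range(NUM_TAGS)]  # Reshape to 4 x d_h
    b = [x + c for x, c in zip(matvec(W_b_hyper, r), b_b_hyper)]
    alpha = softmax([x + c for x, c in zip(matvec(W_moe, r), b_moe)])  # gating, sums to 1
    T = [[sum(alpha[e] * experts[e][i][j] for e in range(K))           # convex combination
          for j in range(NUM_TAGS)] for i in range(NUM_TAGS)]
    return W, b, alpha, T

r = [random.uniform(-1, 1) for _ in range(D_R)]  # a stand-in attribute embedding
W, b, alpha, T = generate_decoder(r)
print(len(W), len(W[0]), len(b), len(T), len(T[0]))  # 4 6 4 4 4
```

Because every attribute shares the same hypernetwork and expert matrices, similar attribute embeddings yield similar decoders, which is exactly the knowledge-sharing effect argued for in the text.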
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "method", "objective", "method", "objective", "abstain", "objective", "result", "abstain", "other", "other", "other", "other", "method", "method", "other", "other", "other", "other", "abstain", "abstain", "other", "abstain", "abstain", "method", "other", "other", "abstain", "other", "abstain", "other", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "other", "method", "method", "abstain", "other", "abstain", "other", "abstain", "abstain", "other", "result", "abstain", "abstain", "result", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", 
"abstain", "other", "other", "other", "other", "other", "other", "objective", "method", "other", "objective", "other", "other", "method", "other", "other", "other", "other", "objective", "abstain", "result", "other", "other", "other" ]
[ "Most dominant neural machine translation (NMT) models are restricted to make predictions only according to the local context of preceding words in a left-to-right manner.", "Although many previous studies try to incorporate global information into NMT models, there still exist limitations on how to effectively exploit bidirectional global context.", "In this paper, we propose a C onfidence B ased B idirectional G lobal C ontext A ware (CB-BGCA) training framework for NMT, where the NMT model is jointly trained with an auxiliary conditional masked language model (CMLM).", "The training consists of two stages: (1) multi-task joint training; (2) confidence based knowledge distillation.", "At the first stage, by sharing encoder parameters, the NMT model is additionally supervised by the signal from the CMLM decoder that contains bidirectional global contexts.", "Moreover, at the second stage, using the CMLM as teacher, we further pertinently incorporate bidirectional global context to the NMT model on its unconfidently-predicted target words via knowledge distillation.", "Experimental results show that our proposed CBBGCA training framework significantly improves the NMT model by +1.02, +1.30 and +0.57 BLEU scores on three large-scale translation datasets, namely WMT'14 English-to-German, WMT'19 Chinese-to-English and WMT'14 English-to-French, respectively.", "In recent years, Neural Machine Translation (NMT) has achieved great progress and attracted more attention.", "Most dominant NMT models mainly adopt an encoder-decoder framework (Sutskever et al., 2014; Bahdanau et al., 2015; Vaswani et al., 2017; Meng and Zhang, 2019; Song et al., 2019; This work is done when Chulun Zhou was interning at Pattern Recognition Center, WeChat AI, Tencent Inc, China. 
* Corresponding author Miao et al., 2021) with the teacher-forcing strategy (Goodfellow et al., 2016) for training.", "Despite its success, the unidirectional property of teacher-forcing strategy restricts NMT models to only focus on the local context, i.e., the preceding words of the to-be-predicted target word at each decoder step.", "Apparently, this strategy tends to be limited because word dependencies are always bidirectional involving both preceding and succeeding words on the target side.", "To address this issue, many previous researches attempt to exploit global information on the target side (Liu et al., 2016; Zhang et al., 2016; Serdyuk et al., 2018; Zhang et al., 2018; Su et al., 2018; Zhang et al., 2019a,b; Su et al., 2019; Zhou et al., 2019; Zhang et al., 2020).", "Typically, they introduce the modelling of target-side global context in the reverse direction by pairing the conventional left-to-right (L2R) NMT model with a right-to-left (R2L) auxiliary model.", "However, in these methods, the modelling of reverse global context is separate from the local context of preceding words.", "Thus, they cannot sufficiently encourage the NMT model to exploit bidirectional global context (Devlin et al., 2019).", "Meanwhile, some of them adopt bidirectional decoding, which often relies on multi-pass decoding or specially customized decoding algorithms (Liu et al., 2016; Zhang et al., 2018; Zhou et al., 2019; Zhang et al., 2020).", "Another series of studies (Conneau and Lample, 2019; Edunov et al., 2019; Weng et al., 2020; Baziotis et al., 2020; Yang et al., 2020; Chen et al., 2020) resort to leveraging target-side bidirectional global context contained in large-scale pre-trained language models (PLM), such as ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019).", "These PLMs are normally not bilingual-aware for translation and trained independently of the NMT model.", "As a special case, Chen et al. 
(2020) design a conditional masked language modelling objective to make BERT aware of source input during the fine-2878 0% 5% 10% 15% 20% 25% 30% P r o p o r t i o n o f t o t a l t a r g e t w o r d s 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Predicted probabilities to the target ground-truths 5.07 6.20 8.77 22.31 12.00 25.67 4.82 4.44 4.54 6.19 Figure 1: The distribution of the NMT-predicted probabilities to the corresponding ground-truth words on the training set of WMT'14 English-to-German translation task, which is output by a fully-trained Transformer model using teacher-forcing strategy.", "tuning stage.", "Nevertheless, in these approaches, the pre-trainings of PLMs are independent of NMT models, limiting the potential of model performance.", "As for how to effectively incorporate global information into NMT models, another notable de-ficiency of previous work is that they do not pertinently enhance the NMT model according to its word-level prediction confidence.", "Ideally, under the teacher-forcing strategy, a well-trained NMT model should assign high probabilities to the target ground-truth words based on correct previous words, which, however, is not the case.", "Figure 1 depicts the predicted word-level probabilistic distribution of a fully-trained Transformer model.", "We find that, even based on totally correct preceding words, there is a considerable portion of target ground-truth words that the model predicts with relatively low probabilities.", "The reasonable cause of this phenomenon is that the NMT model cannot confi-dently predict these target words according to only the local context of preceding words (Watanabe and Sumita, 2002; Hoang et al., 2017).", "Hence, we should especially refine the NMT model on these unconfidently-predicted target words.", "In this paper, we propose a C onfidence B ased B idirectional G lobal C ontext A ware (CBBGCA) training framework for NMT.", "Under our framework, the NMT model is jointly trained with a 
conditional masked language model (CMLM), which is essentially bilingual-aware and contains bidirectional global context on the target side.", "Specifically, the CBBGCA training consists of two stages.", "At the first stage, we jointly train the NMT model and CMLM in a multi-task learning manner by sharing the encoders of the two models.", "This preliminarily enhances the NMT model because the encoder is additionally supervised by the signal from the CMLM decoder that contains bidirectional global context.", "At the second stage, we employ the CMLM to pertinently refine the training of the NMT model on those unconfidently-predicted target words via confidence based knowledge distillation.", "By doing so, our model can be further encouraged to effectively leverage the bilingual-aware bidirectional global context contained in the CMLM.", "To sum up, the major contributions of our paper are as follows: We introduce multi-task learning to benefit the NMT model by sharing its encoder with an auxiliary CMLM, which preliminarily enhances the NMT model to capture bidirectional global context.", "We further propose confidence based knowledge distillation using the CMLM as teacher to especially refine the NMT model on unconfidently-predicted target words, more effectively exploiting the bidirectional global contextual information.", "Extensive experiments on the large-scale WMT'14 English-to-German, WMT'19 Chinese-to-English and WMT'14 English-to-French translation tasks show that our CBBGCA training framework improves the state-of-the-art Transformer model by +1.02, +1.30 and +0.57 BLEU points, respectively, demonstrating the effectiveness and generalizability of our approach.", "In this section, we will introduce our proposed CBBGCA training framework, which employs a CMLM to enhance the NMT model according to its prediction confidence.", "In the following subsections, we first describe the basic architectures of our NMT model and CMLM.", "Then, we introduce the training
procedures of our CBBGCA framework, involving two stages.", "Both the NMT model and CMLM are based on Transformer (Vaswani et al., 2017), which is essentially an attentional encoder-decoder framework.", "2.1.1 Encoder The encoders of the NMT model and the CMLM are identical, and are mainly used to learn the semantic representations of the source sentence.", "Generally, the encoder consists of L_e identical layers, each of which contains two sub-layers: a self-attention (SelfAtt) sub-layer and a position-wise feed-forward network (FFN) sub-layer.", "The SelfAtt sub-layer takes the hidden states of the previous layer as inputs and conducts multi-head scaled dot-product attention.", "Let h^(l) denote the hidden states of the l-th encoder layer; the SelfAtt sub-layer can be formulated as c^(l) = AN(SelfAtt(h^(l-1), h^(l-1), h^(l-1))), (1) where AN(·) denotes the AddNorm, i.e., layer normalization with residual connection.", "Afterwards, the FFN sub-layer is applied, h^(l) = AN(FFN(c^(l))).", "Note that h^(0) is initialized as the embedding sequence of the source sentence, and the hidden states of the L_e-th layer, h^(L_e), are used as the final word-level representations of the source sentence.", "The decoders of the NMT model and the CMLM are similar except for their self-attention mechanisms and prediction manners.", "The NMT Decoder.", "It is comprised of L_d identical layers, each having three sub-layers: a masked self-attention (MaskSelfAtt) sub-layer, a cross-attention (CrossAtt) sub-layer and an FFN sub-layer.", "Particularly, to preserve the autoregressive property at each time step, the MaskSelfAtt sub-layer performs self-attention with an attention mask that prevents the decoder from seeing succeeding words.", "To generate the hidden states s^(l) of the l-th decoder layer, the MaskSelfAtt sub-layer can be formulated as a^(l) = AN(MaskSelfAtt(s^(l-1), s^(l-1), s^(l-1))).", "Please note that our framework can
also be adapted to other NMT models.", "The CrossAtt sub-layer then attends over the encoder representations h^(L_e) to map a^(l) into z^(l).", "Next, the FFN sub-layer maps z^(l) into s^(l): s^(l) = AN(FFN(z^(l))).", "Finally, with the source sentence x, the target translation y_<t and the learned top-layer hidden states s, the decoder models the probability distribution over the target vocabulary at the t-th time step as p(y_t | y_<t, x) = softmax(W s_t), (6) where W is the learnable parameter matrix of the linear transformation.", "The CMLM Decoder.", "Typically, it predicts a set of masked target words y_m given the source sentence x and the set of observable target words y_o.", "The CMLM decoder also contains L_d identical layers, each of which also includes a SelfAtt sub-layer, a CrossAtt sub-layer, and an FFN sub-layer.", "Unlike in the MaskSelfAtt sub-layer of the NMT decoder, the attention mask is removed in the SelfAtt sub-layer of the CMLM decoder.", "Finally, with the learned top-layer hidden states s′ of the CMLM decoder, the predicted probability distribution for every masked target word y_t ∈ y_m can be formalized as p(y_t | y_o, x) = softmax(W′ s′_t), (7) where W′ is the learnable parameter matrix of the linear transformation.", "Note that since the CMLM decoder takes y_o rather than y_<t as input, which includes both preceding and succeeding words with respect to every masked target word, it should contain bidirectional global contextual information.", "The training of the CBBGCA framework involves two stages.", "At the first stage, we jointly train the NMT model and CMLM by multi-task learning.", "At the second stage, according to the word-level prediction confidence, we employ the CMLM to refine the training of the NMT model through knowledge distillation.", "In the first training stage, given a batch of training instances, we jointly train the NMT model and CMLM by simultaneously optimizing their respective objectives:", "L_1(θ_e, θ_nd, θ_cd) = λ L_nmt + (1 − λ) L_cmlm, (8)", "where λ is a balancing hyper-parameter and θ_e, θ_nd and θ_cd denote the parameters of the shared encoder, the NMT decoder and the CMLM
decoder, respectively.", "Very importantly, during this procedure of joint training, we share the same encoder for the NMT model and CMLM.", "In this manner, the NMT model benefits from the multi-task learning because the encoder is additionally supervised by the signal from the CMLM decoder, which contains bidirectional global context.", "Besides, we adopt the strategy used in (Ghazvininejad et al., 2019) to optimize the CMLM.", "Concretely, we randomly select n words, where n ∼ uniform(1, |y|), and replace each word with a special token [M], splitting y into y_o and y_m.", "Formally, we optimize the CMLM by minimizing the following objective for every word in y_m: L_cmlm(θ_e, θ_cd) = −Σ_{y_t ∈ y_m} log p(y_t | y_o, x).", "At the second stage, once we obtain the two fully-trained models, we use the CMLM to further refine the training of the NMT model through knowledge distillation (KD).", "The reason why we introduce such KD-based model training is that the conventional NMT model predicts a considerable portion of target ground-truth words with relatively low probabilities, as shown in Figure 1.", "This phenomenon indicates that the NMT model cannot confidently predict these target words based on only the local context of preceding words.", "Therefore, we aim to pertinently distill the knowledge of the CMLM into the NMT model on these unconfidently-predicted target words, because the CMLM contains bilingual-aware bidirectional global context.", "Figure 2 depicts the training procedure of this stage with an illustrative example.", "Given the source sentence x and the preceding ground-truth words y_<t at each time step t, we first let the NMT model make predictions for every target word using Equation 6, producing word-level probability distributions p_1, p_2, ..., p_|y|.", "Then, we determine the word set y_m where the predicted probabilities p_t of the corresponding ground-truth words are lower than a threshold value ε: y_m = {y_t | p_t ≤ ε, 1 ≤ t ≤ |y|}.", "Next, we obtain the set y_o of partially observable target words by replacing those selected ground-truth words with a special token [M].", "Subsequently, we feed y_o to the CMLM and obtain its predicted probability distribution q_t for every word in y_m using Equation 7.", "To pertinently refine the NMT model on the set y_m of its unconfidently-predicted target words, we use the CMLM with fixed parameters as teacher and transfer its knowledge to the NMT model.", "Along with the supervision from the corresponding ground-truth words, we optimize the NMT model with a balancing factor α through the following objective: L_kd(θ_ne, θ_nd) = Σ_{y_t ∈ y_m} {α KL(q_t || p_t) − (1 − α) log p_t}, (12) where θ_ne, θ_nd and KL(·) represent the NMT encoder parameters, the NMT decoder parameters and the Kullback–Leibler divergence (Sohn et al., 2015), respectively.", "Here, we follow (Clark et al., 2019) to linearly decrease the factor α from 1 to 0 throughout training.", "This guides the NMT model to absorb more knowledge from the CMLM at the early period of stage 2 and gradually re-focus on the ground-truth words to learn better.", "Finally, the total training objective of this stage is as follows: L_2(θ_ne, θ_nd) = L_kd(θ_ne, θ_nd) − Σ_{y_t ∈ y_o\[M]} log p_t, (13) where y_o\[M] represents y_o excluding all special tokens [M].", "By doing so, we can fully strengthen the ability of the NMT model to leverage the bilingual-aware bidirectional global context contained in the CMLM.", "Note that the CMLM is not involved at inference time.", "We carry out experiments on three large-scale translation tasks: WMT'14 English-to-German (En→De), WMT'19 Chinese-to-English (Zh→En) and WMT'14 English-to-French (En→Fr).", "The data are preprocessed using Byte-Pair Encoding (BPE) (Sennrich et al., 2016).", "More dataset statistics and the detailed preprocessing procedures are described in Appendix A.
3.2 Implementation Details We follow the settings used in (Vaswani et al., 2017) to build the NMT model under the Transformer-base configuration.", "Concretely, the Transformer-base architecture comprises 6 encoder and 6 decoder layers, each with a hidden size of 512, FFN sub-layers of dimension 2,048 and 8 heads in the multi-head attentions.", "For more details about the training and inference, please refer to Appendix B. 3.3 Hyper-parameters Apart from the hyper-parameters we empirically set based on previous experience, the balancing factor λ in Equation 8 and the confidence threshold ε for determining y_m in Equation 13 are the hyper-parameters we need to manually tune on the validation set.", "Footnote 2: https://github.com/rsennrich/subword-nmt", "Figure 3: BLEU scores on the validation set for different values of the balancing factor λ.", "To balance the training of the NMT model and CMLM, we select the minimum λ that can bring steady improvements to the NMT model within 200,000 steps.", "As shown in Figure 3(a), we gradually vary λ from 0.5 to 1.0 with an increment of 0.1 and evaluate the performance on the validation set.", "We find that the NMT model achieves its peak when λ = 0.7.", "Hence, λ is set to 0.7 for the joint training of the two models at the first stage.", "Given the selected λ, at the second training stage, we also investigate the impact of ε on the validation set.", "We adjust its value from 0.0 to 0.3 with an interval of 0.05.", "As shown in Figure 3(b), the NMT model performs the best when ε is 0.2.", "Therefore, we set ε = 0.2 as the confidence threshold for the second training stage.", "In our experiments, CBBGCA is the system under our proposed training framework, as described in Section 2.2.", "Multi-300k denotes the baseline system that jointly optimizes the NMT model and CMLM by sharing their encoders throughout the whole 300k training steps, which is used to make comparison
with conducting CBKD at the second training stage.", "For evaluation, in addition to the widely used BLEU (Papineni et al., 2002), we also adopt Comet (Rei et al., 2020), a metric that has recently gained popularity.", "Results on WMT'14 En→De.", "Table 1 lists several existing competitive NMT systems and ours.", "First, we can see that Multi-300k surpasses Transformer by +0.58 BLEU and +0.0101 Comet scores.", "Table 1: BLEU (%) and Comet scores on WMT'14 En→De. Existing systems: Transformer (Vaswani et al., 2017) 27.30; Rerank-NMT (Liu et al., 2016) 27.81; ABD-NMT (Zhang et al., 2018) 28.22; FKD-NMT (Zhang et al., 2019a) 27.84; SB-NMT (Zhou et al., 2019) 29.21; DBERT-NMT (Chen et al., 2020) 27.53 (Comet scores not reported). Our systems: Transformer 27.30 BLEU / 0.2602 Comet; Multi-300k 27.88 / 0.2703; CBBGCA 28.32 / 0.2828.", "Moreover, the BLEU and Comet scores of CBBGCA are respectively +1.02 and +0.0226 higher than Transformer, verifying that CBKD at the second training stage brings further improvement.", "Next, our proposed framework outperforms most recent competitive models.", "Specifically, CBBGCA yields a better result than ABD-NMT (Zhang et al., 2018) and FKD-NMT (Zhang et al., 2019a), which only provide the NMT model with unidirectional future contexts.", "This proves the power of incorporating target-side bidirectional global context into the NMT model.", "Note that ABD-NMT needs two-pass decoding, which first obtains a reverse state sequence by a backward decoder and then uses the forward decoder to generate the final translations.", "The only exception is SB-NMT (Zhou et al., 2019), which designs elaborately customized bidirectional decoding algorithms and is actually not fairly comparable to ours because of its decoding manner and the involvement of synthetic training data.", "These results demonstrate the effectiveness of our proposed training framework.", "Results on WMT'19 Zh→En and WMT'14 En→Fr.", "From Table 2, Multi-300k preliminarily gains +0.35 and
+0.29 BLEU scores over Transformer on Zh→En and En→Fr, respectively.", "Moreover, our CBBGCA further strongly outperforms Multi-300k and achieves a total of +1.30 and +0.57 BLEU score improvements over Transformer on the two datasets, respectively.", "Footnote 3: The result of SB-NMT is the reported performance with the checkpoint averaging technique from (Zhou et al., 2019).", "In terms of Comet, comparing the different models, we can see similar results.", "Note that the WMT'19 Zh→En (20M) and WMT'14 En→Fr (36M) datasets are much larger than the WMT'14 En→De (4.5M) dataset, demonstrating the effectiveness of our proposed framework on various language pairs.", "To fully investigate each part of our proposed training framework, we conduct ablation studies on the WMT'14 En→De translation task.", "Table 3 reports the ablation results on the test set.", "We first validate the necessity of our two-stage strategy by only training the model using the multi-task joint training.", "w/ Dynamic means the weights in Equation 8 are dynamically adjusted.", "Specifically, we linearly increase λ from 0.5 to 1.0 throughout the whole training process.", "We can see that its performance is just slightly higher than the fixed-weight Multi-300k and still significantly inferior to the two-stage CBBGCA.", "Among the feasible options in the two-stage strategy, w/o ShareEnc represents not sharing encoders at the first training stage, and its performance decreases by 0.56 BLEU score.", "This shows that the NMT encoder is enhanced by the joint training with the CMLM.", "As for w/o
CBKD, which means not performing KD on any target words at the second training stage (i.e., α = 0), its performance also decreases by 0.40 BLEU score.", "This demonstrates the effect of pertinently incorporating bidirectional global context into the NMT model on its unconfidently-predicted target words.", "In our training framework, for each sentence pair, we adopt KD to transfer the knowledge of the CMLM into the NMT model only on the word set y_m.", "The set is determined by masking the k target words whose NMT-predicted probabilities of the corresponding ground-truths are lower than a threshold ε.", "Obviously, there are alternative strategies for the above process.", "Therefore, we further investigate the following variants: Random: Regardless of confidence, we randomly select k words of a target sentence to be masked for the CMLM.", "NMT-High: As a contrast, we mask the target words whose NMT-predicted probabilities of the ground-truths are higher than 1 − ε.", "NMT-Wrong: We mask the target words where the predictions of the NMT model do not coincide with the corresponding ground-truths.", "All-at-Once: In this variant, to validate the necessity of selectively distilling knowledge on a portion rather than all of the target words, we generate CMLM-predicted probability distributions for all target words.", "As an extreme case, we mask all target words at once with only the source sentences as input to the CMLM.", "Part-to-All: Instead of masking all target words at once, we generate the CMLM-predicted probability distributions in a part-to-all way.", "Concretely, we first generate the NMT-predicted probability distributions for all words.", "Then, all target words are divided into several non-overlapping subsets, each corresponding to a certain probability interval.", "Each time, we mask the subset of target words whose probabilities of the ground-truths are located within the corresponding interval.", "Footnote 4: It differs from Multi-300k in that the
CMLM is not optimized at the second stage (200k–300k steps) in w/o CBKD.", "In Multi-300k, between 200k and 300k steps, the CMLM and NMT model continue to be jointly optimized by sharing their encoders.", "Particularly, because the hyper-parameter ε is 0.2 in CBBGCA, we have a total of 5 iterations and the intervals are set to [0.0, 0.2], [0.2, 0.4], [0.4, 0.6], [0.6, 0.8], [0.8, 1.0].", "Table 4 lists the results with different KD strategies.", "We can observe that all these variants are inferior to our CBBGCA method.", "Particularly, the results of Random and NMT-High indicate that conducting knowledge distillation on either randomly selected or confidently-predicted target words is less effective than on those unconfidently-predicted ones.", "Next, the result of NMT-Wrong is lower than CBBGCA.", "This may be due to the fact that the NMT model assigns low probabilities to some correctly-predicted target words.", "Thus, the NMT model fails to absorb the beneficial knowledge from the CMLM on these words.", "Lastly, All-at-Once and Part-to-All represent two approaches to generating CMLM-predicted probability distributions for all target words.", "It is reasonable for All-at-Once to obtain a worse performance, since the CMLM cannot predict well without any observable word on the target side.", "For Part-to-All, we can see it improves the NMT model over Multi-300k but is still worse and incurs more computational cost than CBBGCA.", "This also echoes the finding in NMT-High that applying KD on confidently-predicted target words is not optimal.", "All these results demonstrate that it is crucial for the NMT model to pertinently exploit bidirectional global context on its unconfidently-predicted target words.", "We also investigate the change of model confidence with respect to target ground-truth words on the training set.", "Table 5 lists the percentage of tokens within each interval, in terms of NMT-predicted probability.", "Because
a probability higher than 0.5 must be the maximum across the vocabulary, we group 0.5–1.0 as a whole high-confidence interval while the others are low-confidence intervals.", "From the table, we can observe that the number of tokens in the low-confidence intervals drops.", "For instance, the number of tokens located in [0.0, 0.2] becomes 0.69% fewer, which is a notable change considering that the WMT'14 En→De training set contains roughly 4.5 million sentences with a total of approximately 140 million tokens.", "This indicates that the NMT model becomes more confident about the target ground-truth words.", "There are also some studies (Edunov et al., 2019; Conneau and Lample, 2019; Yang et al., 2020; Weng et al., 2020; Chen et al., 2020) that focus on incorporating large-scale PLMs into the NMT model.", "Different from these approaches, which require pre-training on massive external data, in this work the integration of the CMLM into the training procedure directly provides the NMT model with target-side bidirectional global context without external data.", "To show that our proposed training framework is compatible with and orthogonal to existing approaches involving large-scale PLMs, we conduct experiments with an external RoBERTa model (Liu et al., 2019) in Appendix C.
4.5 Comparison over Stronger Systems To further validate our proposed training framework, we conduct experiments over stronger systems.", "Particularly, we first compare our proposed training framework with Baziotis et al. (2020), who used a bidirectional LM as a prior to regularize the NMT model, which is similar to our method to some extent.", "Following their setting, we train a target-side language model using a 6-layer Transformer decoder.", "Then, it is used as the teacher model to impose soft constraints on the output of the NMT model.", "The upper rows of Table 6 give the comparison results (baseline Transformer: 25.54 BLEU).", "We can see that our CBBGCA still outperforms Transformer + LM prior.", "In addition, back-translated data are often used to boost NMT models.", "We also involve additional back-translated data during training.", "Specifically, we use an in-house English-to-Chinese Transformer-base model to translate the English sentences in the WMT'19 Zh→En training set.", "Then, we add these back-translated data to the WMT'19 Zh→En training set.", "The lower rows of Table 6 list the performance of our models under this setting.", "Similar to the previous results, this shows that both Multi-300k and CBBGCA consistently improve the NMT model on the BT-augmented WMT'19 Zh→En, demonstrating the effectiveness of our proposed training framework.", "In Appendix D, we give an illustrative example from the WMT'19 Zh→En test set to show the improvements of our model.", "This line of research aims at modelling target-side global context in the reverse direction with an auxiliary model.", "Liu et al. (2016) first adopt L2R and R2L NMT models to independently generate translations through beam search, then re-rank the candidate list via their agreement.", "Zhang et al. (2018) employ a backward decoder to capture reverse target-side context, which is then exploited by the forward decoder.", "Serdyuk et al.
(2018) propose twin networks, where the forward network is encouraged to generate hidden states similar to those of the backward network.", "Zhang et al. (2019a) present a future-aware knowledge distillation framework enabling the unidirectional decoder to explore the future context for word predictions.", "Zhou et al. (2019) propose a synchronous bidirectional NMT model with a revised beam search algorithm that involves interactive L2R and R2L decoding.", "Zhang et al. (2019b) also combine the L2R and R2L NMT models by considering their agreement, helping the model to generate sentences with better prefixes and suffixes.", "Although these methods indeed gain some improvements, the modelling of reverse global context is independent of the local context of preceding words.", "Meanwhile, they usually rely on elaborately designed mechanisms for burdensome multi-pass decoding.", "Another line of research exploits the global contextual information contained in large-scale pre-trained language models (PLMs) via knowledge distillation (KD).", "For example, Edunov et al. (2019) and Conneau and Lample (2019) feed the top-layer representations of ELMo or BERT to NMT encoders.", "Yang et al. (2020) explore three techniques to apply BERT to NMT models, namely asymptotic distillation, dynamic switch for knowledge fusion, and rate-scheduled updating.", "Weng et al. (2020) propose a training framework consisting of a dynamic fusion mechanism and a continuous KD paradigm to leverage the knowledge of various PLMs.", "Baziotis et al. (2020) incorporate a language model prior for low-resource NMT.", "Chen et al.
(2020) fine-tune BERT on the parallel corpus to make it aware of the source input, and then utilize it to improve the NMT model via KD over all target words.", "Compared to this, CBBGCA jointly optimizes the NMT model and the auxiliary model, leading to better performance.", "Moreover, our method selectively conducts KD on only a portion of the target words, giving higher distillation efficiency, which is similar to Wang et al. (2021).", "Even though these PLM-based approaches have gained remarkable improvements, they unavoidably have some inherent limitations: (1) the monolingual PLMs lack crucial bilingual information for translation; (2) the pre-training of the PLMs is independent of the NMT model training.", "In contrast, our proposed model is able to overcome these limitations.", "In this paper, we propose a CBBGCA training framework for the NMT model to effectively exploit target-side bidirectional global context with an auxiliary CMLM.", "The training consists of two stages.", "At the first stage, we introduce multi-task learning to benefit the NMT model by sharing its encoder with an auxiliary CMLM.", "Then, at the second stage, through confidence based knowledge distillation, we use the CMLM as the teacher to especially refine the NMT model on unconfidently-predicted target words.", "Experimental results show that our framework can significantly improve the NMT model.", "Compared with previous work, neither external nor synthetic data are needed, and only the NMT model is involved during inference.", "The project was supported by the National Natural Science Foundation of China (No. 62036004, No. 61672440), the Natural Science Foundation of Fujian Province of China (No. 2020J06001), and the Youth Innovation Fund of Xiamen (No. 3502Z20206059).", "We also thank the reviewers for their insightful comments." ]
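The confidence-based knowledge distillation described in the sentences above (selecting the low-confidence set y_m with threshold ε, then mixing a KL term against the CMLM teacher with the usual NLL via a factor α, as in Equations 12 and 13) can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation; the helper names `select_low_confidence` and `cbkd_loss` are hypothetical, and a real system would compute these losses over batched logits inside the training loop.

```python
import numpy as np

def select_low_confidence(p_gold, eps=0.2):
    """Indices of target positions whose NMT-predicted probability for the
    ground-truth word is at most the confidence threshold eps (the set y_m)."""
    return [t for t, p in enumerate(p_gold) if p <= eps]

def kl_divergence(q, p):
    """KL(q || p) between two categorical distributions over the vocabulary."""
    return float(np.sum(q * np.log(q / p)))

def cbkd_loss(p_dists, q_dists, gold_ids, eps=0.2, alpha=0.5):
    """Second-stage objective (cf. Eqs. 12-13): on the unconfident positions
    y_m, mix KL against the CMLM teacher distribution q_t with the usual NLL,
    weighted by alpha; on the remaining positions, plain NLL on the gold word.

    p_dists, q_dists: per-position vocabulary distributions from the NMT model
    and the (frozen) CMLM teacher; gold_ids: ground-truth word indices.
    """
    p_gold = [p[g] for p, g in zip(p_dists, gold_ids)]
    y_m = set(select_low_confidence(p_gold, eps))
    loss = 0.0
    for t, (p, q, g) in enumerate(zip(p_dists, q_dists, gold_ids)):
        if t in y_m:
            loss += alpha * kl_divergence(q, p) - (1 - alpha) * np.log(p[g])
        else:
            loss += -np.log(p[g])
    return loss
```

As the paper describes, α would be decayed linearly from 1 to 0 over the course of stage two, so the model first leans on the teacher and gradually re-focuses on the ground-truth words; the CMLM parameters stay fixed throughout this stage.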
[ "abstain", "abstain", "objective", "abstain", "abstain", "method", "objective", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "objective", "method", "abstain", "objective", "abstain", "method", "method", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", 
"abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "objective", "objective", "abstain", "objective", "method", "result", "abstain", "other", "other" ]
[ "Answering complex questions is a time-consuming activity for humans that requires reasoning and integration of information.", "Recent work on reading comprehension made headway in answering simple questions, but tackling complex questions is still an ongoing research challenge.", "Conversely, semantic parsers have been successful at handling compositionality, but only when the information resides in a target knowledge-base.", "In this paper, we present a novel framework for answering broad and complex questions, assuming answering simple questions is possible using a search engine and a reading comprehension model.", "We propose to decompose complex questions into a sequence of simple questions, and compute the final answer from the sequence of answers.", "To illustrate the viability of our approach, we create a new dataset of complex questions, COMPLEXWEBQUESTIONS , and present a model that decomposes questions and interacts with the web to compute an answer.", "We empirically demonstrate that question decomposition improves performance from 20.8 precision@1 to 27.5 preci-sion@1 on this new dataset.", "Humans often want to answer complex questions that require reasoning over multiple pieces of evidence, e.g., From what country is the winner of the Australian Open women's singles 2008? 
.", "Answering such questions in broad domains can be quite onerous for humans, because it requires searching and integrating information from multiple sources.", "Recently, interest in question answering (QA) has surged in the context of reading comprehension (RC), where an answer is sought for a question given one or more documents (Hermann et al., 2015; Joshi et al., 2017; Rajpurkar et al., 2016).", "q: What city is the birthplace of the author of 'Without End', and hosted Euro 2012?", "Neural models trained over large datasets led to great progress in RC, nearing human-level performance (Wang et al., 2017).", "However, analysis of models revealed (Jia and Liang, 2017; Chen et al., 2016) that they mostly excel at matching questions to local contexts, but struggle with questions that require reasoning.", "Moreover, RC assumes documents with the information relevant for the answer are available, but when questions are complex, even retrieving the documents can be difficult.", "Conversely, work on QA through semantic parsing has focused primarily on compositionality: questions are translated to compositional programs that encode a sequence of actions for finding the answer in a knowledge-base (KB) (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Artzi and Zettlemoyer, 2013; Krishnamurthy and Mitchell, 2012; Kwiatkowski et al., 2013; Liang et al., 2011).", "However, this reliance on a manually-curated KB has limited the coverage and applicability of semantic parsers.", "In this paper we present a framework for QA that is broad, i.e., it does not assume information is in a KB or in retrieved documents, and compositional, i.e., to compute an answer we must perform some computation or reasoning.", "Our thesis is that answering simple questions can be achieved by combining a search engine with an RC model.", "Thus, answering complex questions can be addressed by decomposing the question into a sequence of simple questions, and computing the answer from the
corresponding answers.", "Figure 1 illustrates this idea.", "Our model decomposes the question in the figure into a sequence of simple questions, each of which is submitted to a search engine; an answer is then extracted from the search result.", "Once all answers are gathered, a final answer can be computed using symbolic operations such as union and intersection.", "To evaluate our framework we need a dataset of complex questions that calls for reasoning over multiple pieces of information.", "Because an adequate dataset is missing, we created COMPLEXWEBQUESTIONS, a new dataset for complex questions that builds on WEBQUESTIONSSP, a dataset that includes pairs of simple questions and their corresponding SPARQL queries.", "We take SPARQL queries from WEBQUESTIONSSP and automatically create more complex queries that include phenomena such as function composition, conjunctions, superlatives and comparatives.", "Then, we use Amazon Mechanical Turk (AMT) to generate natural language questions, and obtain a dataset of 34,689 question-answer pairs (and also SPARQL queries that our model ignores).", "Data analysis shows that the examples are diverse and that AMT workers perform substantial paraphrasing of the original machine-generated questions.", "We propose a model for answering complex questions through question decomposition.", "Our model uses a sequence-to-sequence architecture (Sutskever et al., 2014) to map utterances to short programs that indicate how to decompose the question and compose the retrieved answers.", "To obtain supervision for our model, we perform a noisy alignment from machine-generated questions to natural language questions and automatically generate noisy supervision for training.", "We evaluate our model on COMPLEXWEBQUESTIONS and find that question decomposition substantially improves precision@1 from 20.8 to 27.5.", "We find that humans are able to reach 63.0 precision@1 under a limited time budget, leaving ample room for improvement in future
work.", "To summarize, our main contributions are: 1 We differ training from question-answer pairs for future work.", "1. A framework for answering complex questions through question decomposition.", "2. A sequence-to-sequence model for question decomposition that substantially improves performance.", "3. A dataset of 34,689 examples of complex and broad questions, along with answers, web snippets, and SPARQL queries.", "Our dataset, COMPLEXWEBQUESTIONS , can be downloaded from http://nlp.cs.tau.", "ac.il/compwebq and our codebase can be downloaded from https://github.com/ alontalmor/WebAsKB .", "Our goal is to learn a model that given a question q and a black box QA model for answering simple questions, SIMPQA ( ) , produces a computation tree t (defined below) that decomposes the question and computes the answer.", "The model is trained from a set of N question-computation tree pairs { q i , t i } Ni =1 or question-answer pairs { q i , a i } Ni =1 .", "A computation tree is a tree where leaves are labeled with strings, and inner nodes are labeled with functions.", "The arguments of a function are its children sub-trees.", "To compute an answer, or denotation , from a tree, we recursively apply the function at the root to its children.", "More formally, given a tree rooted at node t , labeled by the function f , that has children c 1 ( t ) , . . . , c k ( t ) , the denotation J t K = f ( J c 1 ( t ) K , . . . 
, J c k ( t ) K ) is an arbitrary function applied to the denotations of the root's children.", "Denotations are computed recursively and the denotation of a string at the leaf is the string itself, i.e., J l K = l .", "This is closely related to semantic functions in semantic parsing (Berant and Liang, 2015), except that we do not in-642 teract with a KB, but rather compute directly over the breadth of the web through a search engine.", "Figure 2 provides an example computation tree for our running example.", "Notice that words at the leaves are not necessarily in the original question, e.g., city is paraphrased to cities .", "More broadly, our framework allows paraphrasing questions in any way that is helpful for the function SIMPQA ( ) .", "Paraphrasing for better interaction with a QA model has been recently suggested by Buck et al. (2017) and Nogueira and Cho (2016).", "We defined the function SIMPQA ( ) for answering simple questions, but in fact it comprises two components in this work.", "First, the question is submitted to a search engine that retrieves a list of web snippets.", "Next, a RC model extracts the answer from the snippets.", "While it is possible to train the RC model jointly with question decomposition, in this work we pre-train it separately, and later treat it as a black box.", "Functions in our formal language take arguments and return values that can be strings (when decomposing or re-phrasing the question), sets of strings, or", "sets of numbers.", "Our set of functions includes:", "1. SIMPQA ( ) : Model for answering simple questions, which takes a string argument and returns a set of strings or numbers as answer.", "2. 
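The recursive denotation computation and the COMP/CONJ functions can be sketched in a few lines. The tree encoding below (a leaf is a string; an inner node is a (function, children) tuple) and the stubbed SIMPQA lookup table are our own illustrative assumptions loosely following the paper's running example, not the actual implementation.

```python
# Sketch of computation-tree evaluation: a leaf string denotes itself, and an
# inner node applies its function to the denotations of its children.
# SIMPQA is stubbed with an invented lookup table; the question strings and
# answer sets are illustrative only.

def denotation(tree):
    """Recursively compute the denotation of a computation tree."""
    if isinstance(tree, str):              # leaf: a string denotes itself
        return tree
    func, children = tree                  # inner node: (function, [children])
    return func(*(denotation(c) for c in children))

def simpqa(question):
    """Stand-in for the black-box SIMPQA model (toy lookup table)."""
    table = {
        "author of Without End": {"Ken Follett", "Adam Zagajewski"},
        "birthplace of Ken Follett": {"Cardiff"},
        "birthplace of Adam Zagajewski": {"Lviv"},
        "cities that hosted Euro 2012": {"Warsaw", "Kiev", "Lviv"},
    }
    return table.get(question, set())

def comp(q, answers):
    """COMP(q, A): union over a in A of SIMPQA(q with VAR replaced by a)."""
    result = set()
    for a in answers:
        result |= simpqa(q.replace("VAR", a))
    return result

def conj(left, right):
    """CONJ: intersection of two answer sets."""
    return left & right

# CONJ(COMP("birthplace of VAR", SIMPQA("author of Without End")),
#      SIMPQA("cities that hosted Euro 2012"))
tree = (conj, [(comp, ["birthplace of VAR", (simpqa, ["author of Without End"])]),
               (simpqa, ["cities that hosted Euro 2012"])])
print(denotation(tree))                    # {'Lviv'}
```

Evaluating the tree bottom-up first gathers candidate authors, then their birthplaces via COMP, and finally intersects them with the Euro 2012 host cities via CONJ.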
COMP(·, ·): This function takes a string containing one unique variable VAR , and a set of answers.", "E.g., in Figure 2 the first argument is birthplace of VAR , and the second argument is { KEN FOLLETT , ADAM ZAGAJEWSKI } .", "The function replaces the variable with each answer's string representation and returns their union.", "Formally, COMP(q, A) = ⋃_{a ∈ A} SIMPQA(q/a), where q/a denotes the string produced when replacing VAR in q with a .", "This is similar to function composition in CCG (Steedman, 2000), or a join operation in λ-DCS (Liang, 2013), where the string is a function applied to previously-computed values.", "3. CONJ(·, ·): takes two sets and returns their intersection.", "Other set operations can be defined analogously.", "As syntactic sugar, we allow CONJ(·, ·) to take strings as input, which means that we run SIMPQA(·) to obtain a set and then perform the intersection.", "The root node in Figure 2 illustrates an application of CONJ.", "4. ADD(·, ·): takes two singleton sets of numbers and returns a set with their sum.", "Similar functions can be defined analogously.", "While we support mathematical operations, they were not required in our dataset.", "Other logical operations: In semantic parsing, superlative and comparative questions like 'What is the highest European mountain?' or 'What European mountains are higher than Mont Blanc?' are answered by joining the set of European mountains with their elevation.", "While we could add such functions to the formal language, answering such questions from the web is cumbersome: we would have to extract a list of entities and a numerical value for each.", "Instead, we handle such constructions using SIMPQA directly, assuming they are mentioned verbatim on some web document.", "Similarly, negation questions ( What countries are not in the OECD?
) are difficult to handle when working against a search engine only, as this is an open-world setup and we do not hold a closed set of countries over which we can perform set subtraction.", "In future work, we plan to interface with tables (Pasupat and Liang, 2015) and KBs (Zhong et al., 2017).", "This will allow us to perform set operations over well-defined sets, and to handle superlatives and comparatives in a compositional manner.", "Evaluating our framework requires a dataset of broad and complex questions that examine the importance of question decomposition.", "While many QA datasets have been developed recently (Yang et al., 2015; Rajpurkar et al., 2016; Hewlett et al., 2016; Nguyen et al., 2016; Onishi et al., 2016; Hill et al., 2015; Welbl et al., 2017), they lack a focus on the importance of question decomposition.", "Most RC datasets contain simple questions that can be answered from a short input document.", "Recently, TRIVIAQA (Joshi et al., 2017) presented a larger portion of complex questions, but still most do not require reasoning.", "Moreover, the focus of TRIVIAQA is on answer extraction from documents that are given.", "We, conversely, highlight question decomposition for finding the relevant documents.", "Put differently, RC is complementary to question decomposition and can be used as part of the implementation of SIMPQA.", "[Figure 3: (1) seed question: 'What movies have Robert Pattinson starred in?'; (2) SPARQL: ns:robert_pattinson ns:film.actor.film ?c . ?c ns:film.performance.film ?x . ?x ns:film.film.produced_by ns:erwin_stoff; (3) machine-generated question: 'What movies have Robert Pattinson starred in and that was produced by Erwin Stoff?'; (4) natural language: 'Which Robert Pattinson film was produced by Erwin Stoff?']", "In Section 6 we demonstrate that question decomposition is useful for two different RC approaches.", "To generate complex questions we use the dataset WEBQUESTIONSSP (Yih et al., 2016), which contains 4,737 questions paired with SPARQL queries for Freebase (Bollacker et al., 2008).", "Questions are broad but simple.", "Thus, we sample question-query pairs, automatically create more complex SPARQL queries, automatically generate questions that are understandable to AMT workers, and then have them paraphrase those into natural language (similar to Wang et al. (2015)).", "We compute answers by executing the complex SPARQL queries against Freebase, and obtain broad and complex questions.", "Figure 3 provides an example of this procedure, and we elaborate next.", "Generating SPARQL queries: Given a SPARQL query r, we create four types of more complex queries: conjunctions, superlatives, comparatives, and compositions.", "Table 1 gives the exact rules for generation.", "For conjunctions, superlatives, and comparatives, we identify queries in WEBQUESTIONSSP whose denotation is a set A, |A| ≥ 2, and generate a new query r′ whose denotation is a strict subset A′, A′ ⊂ A, A′ ≠ ∅.", "For conjunctions this is done by traversing the KB and looking for SPARQL triplets that can be added and will yield a valid set A′.", "For comparatives and superlatives we find a numerical property common to all a ∈ A, and add a triplet and restrictor to r accordingly.", "For compositions, we find an entity e in r, replace e with a variable y, and add to r a triplet such that the denotation of that triplet is { e }.", "Machine-generated (MG) questions: To
have AMT workers paraphrase SPARQL queries into natural language, we need to present them in an understandable form.", "Therefore, we automatically generate a question they can paraphrase.", "When we generate new SPARQL queries, new predicates are added to the query (Table 1).", "We manually annotated 687 templates mapping KB predicates to text for different compositionality types (with 462 unique KB predicates), and use those templates to modify the original WebQuestionsSP question according to the meaning of the generated SPARQL query.", "E.g., the template for ?x ns:book.author.works_written obj is 'the author who wrote OBJ'.", "For brevity, we provide the details in the supplementary material.", "Question Rephrasing: We used AMT workers to paraphrase MG questions into natural language (NL).", "Each question was paraphrased by one AMT worker and validated by 1-2 other workers.", "To generate diversity, workers got a bonus if the edit distance of a paraphrase was high compared to the MG question.", "A total of 200 workers were involved, and 34,689 examples were produced at an average cost of $0.11 per question.", "Table 1 gives an example for each compositionality type.", "A drawback of our method for generating data is that, because queries are generated automatically, the question distribution is artificial from a semantic perspective.", "Still, developing models that are capable of reasoning is an important direction for natural language understanding, and COMPLEXWEBQUESTIONS provides an opportunity to develop and evaluate such models.", "To summarize, each of our examples contains a question, an answer, a SPARQL query (that our models ignore), and all web snippets harvested by our model when attempting to answer the question.", "This renders COMPLEXWEBQUESTIONS useful for both the RC and semantic parsing communities.", "COMPLEXWEBQUESTIONS builds on WEBQUESTIONS (Berant et al., 2013).", "Questions in WEBQUESTIONS are usually about properties of entities
( What is the capital of France? ), often with some filter for the semantic type of the answer ( Which director , What city ).", "WEBQUESTIONS also contains questions that refer to events with multiple entities ( Who did Brad Pitt play in Troy? ).", "COMPLEXWEBQUESTIONS contains all these semantic phenomena, but we add four compositionality types by generating composition questions (45% of the time), conjunctions (45%), superlatives (5%) and comparatives (5%).", "Paraphrasing: To generate rich paraphrases, we gave a bonus to workers that substantially modified MG questions.", "To check whether this worked, we measured the surface similarity between MG and NL questions.", "Using normalized edit distance and the DICE coefficient, we found that NL questions are different from MG questions and that the similarity distribution has wide support (Figure 4).", "We created a heuristic for approximating the amount of word re-ordering performed by AMT workers.", "For every question, we constructed a matrix A, where A_ij is the similarity between token i in the MG question and token j in the NL question.", "Similarity is 1 if lemmas match, the cosine similarity according to GloVe embeddings (Pennington et al., 2014) when it is above a threshold, and 0 otherwise.", "The matrix A allows us to estimate whether parts of the MG question were re-ordered when paraphrased to NL (details in the supplementary material).", "We find that word re-ordering happened in 44.7% of the conjunction questions and 13.2% of the composition questions, illustrating that substantial changes to the MG question have been made.", "Figure 5 illustrates the matrix A for a pair of questions with re-ordering.", "Qualitative analysis: We randomly sampled 100 examples from the development set and manually identified prevalent phenomena in the data.", "We present these types in Table 2 along with their frequency.", "In 18% of the examples a conjunct in the MG question becomes a modifier of a
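The token-similarity matrix heuristic can be sketched as follows. This is a rough illustration only: real lemmatization and GloVe vectors are replaced here by a toy lemma table and invented 2-dimensional vectors, and the threshold value is ours.

```python
# Sketch of the matrix A: A[i][j] is 1 if lemmas match, the cosine similarity
# of (toy) embeddings when above a threshold, and 0 otherwise. The lemma table
# and vectors below are invented for illustration.

import math

LEMMAS = {"movies": "movie", "films": "film", "film": "film"}
VECTORS = {"movie": (1.0, 0.1), "film": (0.9, 0.2)}   # toy "embeddings"

def lemma(token):
    return LEMMAS.get(token, token)

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def similarity_matrix(mg_tokens, nl_tokens, threshold=0.8):
    A = [[0.0] * len(nl_tokens) for _ in mg_tokens]
    for i, s in enumerate(mg_tokens):
        for j, t in enumerate(nl_tokens):
            if lemma(s) == lemma(t):
                A[i][j] = 1.0
            elif lemma(s) in VECTORS and lemma(t) in VECTORS:
                c = cosine(VECTORS[lemma(s)], VECTORS[lemma(t)])
                A[i][j] = c if c > threshold else 0.0
    return A

A = similarity_matrix(["what", "movies"], ["which", "film"])
print(A)   # A[1][1] is close to 1 ("movies" vs. "film"); the rest are 0
```

With such a matrix, strong off-diagonal entries indicate tokens that moved position between the MG and NL questions.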
wh-word in the NL question (WH-MODIFIER ).", "In 22% substantial word re-ordering of the MG question occurred, and in 42% a minor word re-ordering occurred ('number of building floors is 50' paraphrased as 'has 50 floors').", "AMT workers used a synonym in 54% of the examples, omitted words in 27% of the examples, and added new lexical material in 29%.", "To obtain intuition for operations that will be useful in our model, we analyzed the 100 examples for the types of operations that should be applied to the NL question during question decomposition.", "We found that splitting the NL question is insufficient, and that in 53% of the cases a word in the NL question needs to be copied to multiple questions after decomposition (row 3 in Table 3).", "Moreover, words that did not appear in the MG question need to be added in 39% of the cases, and words need to be deleted in 28% of the examples.", "We would like to develop a model that translates questions into arbitrary computation trees with arbitrary text at the tree leaves.", "However, this requires training from denotations using methods such as maximum marginal likelihood or reinforcement learning (Guu et al., 2017) that are difficult to optimize.", "Moreover, such approaches involve issuing large numbers of queries to a search engine at training time, incurring high costs and slowing down training.", "Instead, we develop a simple approach in this paper.", "We consider a subset of all possible computation trees that allows us to automatically generate noisy full supervision.", "In what follows, we describe the subset of computation trees considered and their representation, a method for automatically generating noisy supervision, and a pointer network model for decoding.", "Representation: We represent computation trees as a sequence of tokens, and consider trees with at most one compositional operation.", "We denote a sequence of question tokens by q_{i:j} = (q_i, . . . , q_j), and the decoded sequence by z.", "We consider the following token sequences (see Table 3):", "1. SimpQA: The function SIMPQA is applied to the question q without paraphrasing.", "In prefix notation this is the tree SIMPQA(q).", "2. Comp_{i,j}: This sequence of tokens corresponds to the following computation tree: COMP(q_{1:i-1} ⊕ VAR ⊕ q_{j+1:|q|}, SIMPQA(q_{i:j})), where ⊕ is the concatenation operator.", "This is used for questions where a substring is answered by SIMPQA and the answers replace a variable before computing a final answer.", "3. Conj_{i,j}: This sequence of tokens corresponds to the computation tree CONJ(SIMPQA(q_{0:i-1}), SIMPQA(q_j ⊕ q_{i:|q|})).", "The idea is that a conjunction can be answered by splitting the question at a single point, where one token is copied to the second part as well ('film' in Table 3).", "If nothing needs to be copied, then j = -1.", "This representation supports one compositional operation, and a single copying operation is allowed without any re-phrasing.", "In future work, we plan to develop a more general representation, which will require training from denotations.", "Supervision: Training from denotations is difficult as it involves querying a search engine frequently, which is expensive.", "Therefore, we take advantage of the original SPARQL queries and MG questions to generate noisy programs for composition and conjunction questions.", "Note that these noisy programs are only used as supervision to avoid the costly process of manual annotation; the model itself does not assume SPARQL queries in any way.", "We generate noisy programs from SPARQL queries in the following manner: First, we automatically identify composition and conjunction questions.", "Because we generated the MG question, we can exactly identify the split points ( i, j in composition questions and i in conjunction questions) in the MG question.", "Then, we use a rule-based algorithm that takes the alignment matrix A
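The mapping from a decoded representation (SimpQA, Comp, or Conj with two indices) back to sub-questions can be sketched as below. The 0-based Python indices and the helper name are our own conventions; the example follows the Conj split for the Taylor Swift question from Table 3.

```python
# Sketch: turn a decoded representation into the sub-question strings it
# implies. Indices here are 0-based; the paper's exact conventions may differ.

def decompose(tokens, op, i=None, j=None):
    if op == "SimpQA":                        # no decomposition
        return [" ".join(tokens)]
    if op == "Comp":                          # inner span answered first,
        inner = " ".join(tokens[i:j + 1])     # then substituted for VAR
        outer = " ".join(tokens[:i] + ["VAR"] + tokens[j + 1:])
        return [inner, outer]
    if op == "Conj":                          # split at i; token j is copied
        left = " ".join(tokens[:i])           # into the second conjunct
        copied = [] if j is None else [tokens[j]]
        right = " ".join(copied + tokens[i:])
        return [left, right]
    raise ValueError("unknown operation: " + op)

q = "what film featured taylor swift and was directed by deborah aquila".split()
print(decompose(q, "Conj", i=5, j=1))
# ['what film featured taylor swift', 'film and was directed by deborah aquila']
```

Note how the token "film" (index 1) is copied into the second conjunct so that both sub-questions are answerable on their own.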
(Section 4), and approximates the split points in the NL question and the index j to copy in conjunction questions.", "The red line in Figure 5 corresponds to the known split point in the MG question, and the blue one is the approximated split point in the NL question.", "The details of this rule-based algorithm are in the supplementary material.", "[Table 3: Examples for the types of computation trees that can be decoded by our model. SimpQA: 'What building in Vienna, Austria has 50 floors' (no split). Comp 5 9: 'Where is the birthplace of the writer of Standup Shakespeare' split into 'Where is the birthplace of VAR' / 'the writer of Standup Shakespeare'. Conj 5 1: 'What film featured Taylor Swift and was directed by Deborah Aquila' split into 'What film featured Taylor Swift' / 'film and was directed by Deborah Aquila'.]", "Thus, we obtain noisy supervision for all composition and conjunction questions and can train a model that translates a question q to a representation z = z_1 z_2 z_3, where z_1 ∈ { Comp , Conj } and z_2 , z_3 are integer indices.", "Pointer network: The representation z points to indices in the input, and thus pointer networks (Vinyals et al., 2015) are a sensible choice.", "Because we also need to decode the tokens COMP and CONJ, we use augmented pointer networks (Zhong et al., 2017): For every question q, an augmented question q′ is created by appending the tokens COMP and CONJ to q.", "This allows us to decode the representation z with one pointer network that at each decoding step points to one token in the augmented question.", "We encode q′ with a one-layer GRU (Cho et al., 2014), and decode z with a one-layer GRU with attention as in Jia and Liang (2016).", "The only difference is that we decode tokens from the augmented question q′ rather than from a fixed vocabulary.", "We train the model with a token-level cross-entropy loss, minimizing −∑_j log p(z_j | x, z_{1:j-1}).", "Parameters include the GRU encoder and decoder, and embeddings for unknown tokens (that are not in
pre-trained GloVe embeddings (Pennington et al., 2014)).", "The trained model decodes COMP and CONJ representations, but sometimes using SIMPQA(q) without decomposition is better.", "To handle such cases we do the following: We assume that we always have access to a score for every answer, provided by the final invocation of SIMPQA (in CONJ questions this score is the maximum of the scores given by SIMPQA for the two conjuncts), and use the following rule to decide whether to use the decoded representation z or SIMPQA(q).", "Given the scores for answers given by z and the scores given by SIMPQA(q), we return the single answer that has the highest score.", "The intuition is that the confidence provided by the scores of SIMPQA is correlated with answer correctness.", "In future work we will train directly from denotations and will handle all logical functions in a uniform manner.", "In this section, we aim to examine whether question decomposition can empirically improve the performance of QA models on complex questions.", "Experimental setup: We used 80% of the examples in COMPLEXWEBQUESTIONS for training, 10% for development, and 10% for test, training the pointer network on 24,708 composition and conjunction examples.", "The hidden state dimension of the pointer network is 512, and we used Adagrad (Duchi et al., 2010) combined with L2 regularization and a dropout rate of 0.25.", "We initialize 50-dimensional word embeddings using GloVe and learn embeddings for missing words.", "Simple QA model: As our SIMPQA function, we download the web-based QA model of Talmor et al. (2017).", "This model sends the question to Google's search engine and extracts a distribution over answers from the top 100 web snippets using manually-engineered features.", "We re-train the model on our data with one new feature: for every question q and candidate answer mention in a snippet, we run RASOR, an RC model by Lee et al.
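The confidence-based fallback between the decomposed program and the undecomposed question can be sketched as a simple scoring rule. The score values and the helper name below are invented for illustration; the paper does not publish this exact code.

```python
# Sketch: keep whichever single answer scores highest, whether it came from
# the decomposed program z or from SIMPQA on the full question.

def pick_answer(decomposed_scores, full_scores):
    """Each argument maps an answer string to a confidence score."""
    merged = dict(full_scores)
    for answer, score in decomposed_scores.items():
        merged[answer] = max(score, merged.get(answer, float("-inf")))
    return max(merged, key=merged.get)

print(pick_answer({"Lviv": 0.8}, {"Warsaw": 0.3}))   # Lviv
```

The rule relies on the assumption stated above: SIMPQA's confidence scores are correlated with answer correctness, so comparing raw scores across the two strategies is meaningful.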
(2016), and add the output logit score as a feature.", "We found that combining the web-facing model of Talmor et al. (2017) and RASOR resulted in improved performance.", "Evaluation: For evaluation, we measure precision@1 (p@1), i.e., whether the highest-scoring answer returned string-matches one of the correct answers (while answers are sets, 70% of the questions have a single answer, and the average size of the answer set is 2.3).", "We evaluate the following models and oracles:", "1. SIMPQA: running SIMPQA on the entire question, i.e., without decomposition.", "2. SPLITQA: Our main model that answers complex questions by decomposition.", "3. SPLITQAORACLE : An oracle model that chooses in hindsight whether to perform question decomposition or use SIMPQA, based on what performs better.", "4. RCQA: This is identical to SIMPQA, except that we replace the RC model from Talmor et al. (2017) with the RC model DOCQA (Clark and Gardner, 2017), whose performance is comparable to state-of-the-art on TRIVIAQA.", "5. SPLITRCQA: This is identical to SPLITQA, except that we replace the RC model from Talmor et al. (2017) with DOCQA.", "6. GOOGLEBOX : We sample 100 random development set questions and check whether Google returns a box that contains one of the correct answers.", "7. 
HUMAN : We sample 100 random development set questions and manually answer the questions with Google's search engine, including all available information.", "We limit the amount of time allowed for answering to 4 minutes.", "Table 4 presents the results on the development and test sets.", "SIMPQA, which does not decompose questions, obtained 20.8 p@1, while by performing question decomposition we substantially improve performance to 27.5 p@1.", "An upper bound with perfect knowledge of when to decompose is given by SPLITQAORACLE at 33.7 p@1.", "RCQA obtained lower performance than SIMPQA, as it was trained on data from a different distribution.", "More importantly, SPLITRCQA outperforms RCQA by 3.4 points, illustrating that this RC model also benefits from question decomposition, despite the fact that it was not created with question decomposition in mind.", "This shows the importance of question decomposition for retrieving documents from which an RC model can extract answers.", "GOOGLEBOX finds a correct answer in 2.5% of the cases, showing that complex questions are challenging for search engines.", "To conclude, we demonstrated that question decomposition substantially improves performance on answering complex questions using two independent RC models.", "Analysis: We estimate human performance (HUMAN ) at 63.0 p@1.", "We find that answering complex questions takes roughly 1.3 minutes on average.", "For questions we were unable to answer, we found that in 27% the answer was correct but exact string match with the gold answers failed; in 23.1% the time required to compute the answer was beyond our capabilities; for 15.4% we could not find an answer on the web; 11.5% were of an ambiguous nature; 11.5% involved paraphrasing errors of AMT workers; and an additional 11.5% did not contain a correct gold answer.", "SPLITQA decides whether to decompose questions based on the confidence of SIMPQA.", "In 61% of the questions the model chooses to decompose the question, and in
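The precision@1 metric as defined here (the top-scoring answer must exactly string-match one of the gold answers) reduces to a few lines; this is a minimal sketch with invented example data, not the official evaluation script.

```python
# Sketch of precision@1: fraction of questions whose single best predicted
# answer string-matches some member of the gold answer set.

def precision_at_1(top_predictions, gold_answer_sets):
    """top_predictions: list of best-answer strings; gold_answer_sets: list of sets."""
    hits = sum(1 for pred, gold in zip(top_predictions, gold_answer_sets)
               if pred in gold)
    return hits / len(top_predictions)

print(precision_at_1(["Lviv", "Paris"], [{"Lviv"}, {"London", "Berlin"}]))  # 0.5
```

Because matching is by exact string, a semantically correct answer in a different surface form counts as an error, which is exactly the failure mode discussed in the error analysis.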
the rest it sends the question as-is to the search engine.", "If one of the strategies (decomposition vs. no decomposition) works, our model chooses the right one in 86% of the cases.", "Moreover, in 71% of these answerable questions, only one strategy yields a correct answer.", "We evaluate the ability of the pointer network to mimic our labeling heuristic on the development set.", "We find that the model outputs the exact correct output sequence 60.9% of the time; allowing errors of one word to the left and right (which often does not change the final output), accuracy is 77.1%.", "Token-level accuracy is 83.0%, and allowing one-word errors, 89.7%.", "This shows that SPLITQA learned to identify decomposition points in the questions.", "We also observed that SPLITQA often produced decomposition points that are better than the heuristic, e.g., for 'What is the place of birth for the lyricist of Roman Holiday', SPLITQA produced 'the lyricist of Roman Holiday', but the heuristic produced 'the place of birth for the lyricist of Roman Holiday'.", "Additional examples of SPLITQA question decompositions are provided in Table 5.", "[Table 5: Examples for question decompositions from SPLITQA. 'Find the actress who played Hailey Rogers, what label is she signed to' split into 'the actress who played Hailey Rogers' / 'Find VAR , what label is she signed to'. 'What are the colors of the sports team whose arena stadium is the AT&T Stadium' split into 'the sports team whose arena stadium is the AT&T Stadium' / 'What are the colors of VAR'. 'What amusement park is located in Madrid Spain and includes the stunt fall ride' split into 'What amusement park is located in Madrid Spain and' / 'park includes the stunt fall ride'. 'Which university whose mascot is The Trojan did Derek Fisher attend' split into 'Which university whose mascot is The Trojan did' / 'university Derek Fisher attend'.]", "ComplexQuestions: To further examine the ability of web-based QA models, we run an experiment against COMPLEXQUESTIONS (Bao et al., 2016), a small dataset of question-answer pairs designed for semantic parsing against Freebase.", "We ran SIMPQA on this dataset (Table 6) and obtained 38.6 F1 (the official metric), slightly lower than COMPQ, the best system, which operates directly against Freebase.", "By analyzing the training data, we found that we can decompose COMP questions with a rule that splits the question when the words 'when' or 'during' appear, e.g., Who was vice president when JFK was president?
.", "3 We decomposed questions with this rule and obtained 39.7 F 1 (SPLITQARULE ).", "Analyzing the development set errors, we found that occasionally SPLITQARULE returns a correct answer that fails to string-match with the gold answer.", "By manually fixing these cases, our development set F 1 reaches 46.9 (SPLITQARULE ++).", "Note that COMPQ does not suffer from any string matching issue, as it operates directly against the Freebase KB and thus is guaranteed to output the answer in the correct form.", "This short experiment shows that a web-based QA model can rival a semantic parser that works against a KB, and that simple question decomposition is beneficial and leads to results comparable to state-of-the-art.", "This work is related to a body of work in semantic parsing and RC, in particular to datasets that focus on complex questions such as TRIVIAQA (Joshi et al., 2017), WIKIHOP (Welbl et al., 2017) and RACE (Lai et al., 2017).", "Our distinction is in proposing a framework for complex QA that focuses on question decomposition.", "Our work is related to Chen et al. (2017) and Watanabe et al. (2017), who combined retrieval and answer extraction on a large set of documents.", "We work against the entire web, and propose ques-2 By adding the output logit from RASOR , we improved test F 1 from 32.6, as reported by Talmor et al. (2017), to 38.6.", "tion decomposition for finding information.", "This work is also closely related to Dunn et al. (2017) and Buck et al. (2017): we start with questions directly and do not assume documents are given.", "Buck et al. (2017) also learn to phrase questions given a black box QA model, but while they focus on paraphrasing, we address decomposition.", "Another important related research direction is Iyyer et al. 
(2016), who answered complex questions by decomposing them.", "However, they used crowdsourcing to obtain direct supervision for the gold decomposition, while we do not assume such supervision.", "Moreover, they work against web tables, while we interact with a search engine against the entire web.", "In this paper we propose a new framework for answering complex questions that is based on question decomposition and interaction with the web.", "We develop a model under this framework and demonstrate that it improves complex QA performance on two datasets and with two RC models.", "We also release a new dataset, COMPLEXWEBQUESTIONS , including questions, SPARQL programs, answers, and web snippets harvested by our model.", "We believe this dataset will serve the QA and semantic parsing communities, drive research on compositionality, and push the community to work on holistic solutions for QA.", "In future work, we plan to train our model directly from weak supervision, i.e., denotations, and to extract information not only from the web, but also from structured information sources such as web tables and KBs.", "We thank Jonathan Herzig, Ni Lao, and the anonymous reviewers for their constructive feedback.", "This work was supported by the Samsung runway project and the Israel Science Foundation, grant 942/16." ]
[ "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "result", "abstain", "method", "objective", "method", "result", "abstain", "objective", "method", "result", "result", "result", "objective", "objective", "objective", "objective", "other", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", 
"abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "method", "objective", "method", "abstain", "other", "method", "method", "other", "abstain", "abstain", "objective", "objective", "objective", "method", "method", "other", "other" ]
[ "Unsupervised bilingual lexicon induction is the task of inducing word translations from monolingual corpora of two languages.", "Recent methods are mostly based on unsupervised cross-lingual word embeddings, the key to which is to find initial solutions of word translations, followed by the learning and refinement of mappings between the embedding spaces of two languages.", "However, previous methods find initial solutions just based on word-level information, which may be (1) limited and inaccurate, and (2) prone to contain some noise introduced by the insufficiently pre-trained embeddings of some words.", "To deal with those issues, in this paper, we propose a novel graph-based paradigm to induce bilingual lexicons in a coarse-to-fine way.", "We first build a graph for each language with its vertices representing different words.", "Then we extract word cliques from the graphs and map the cliques of two languages.", "Based on that, we induce the initial word translation solution with the central words of the aligned cliques.", "This coarse-to-fine approach not only leverages clique-level information, which is richer and more accurate, but also effectively reduces the bad effect of the noise in the pre-trained embeddings.", "Finally, we take the initial solution as the seed to learn cross-lingual embeddings, from which we induce bilingual lexicons.", "Experiments show that our approach improves the performance of bilingual lexicon induction compared with previous methods.", "Bilingual lexicon induction (BLI) is an important task of machine translation and becomes an essential part of recent unsupervised machine translation approaches (Lample et al., 2018; Artetxe et al., 2018c; Marie and Fujita, 2018; Ren et al., 2019; Artetxe et al., 2019).", "Previous methods for BLI are Contribution during internship at MSRA.", "mostly based on unsupervised cross-lingual word embeddings (Zhang et al., 2017; Artetxe et al., 2017; Conneau et al., 2017; Artetxe et al., 2018b; 
Xu et al., 2018; Hoshen and Wolf, 2018; Alvarez-Melis and Jaakkola, 2018), the goal of which is to find a mapping function, typically a linear transformation (Mikolov et al., 2013), to map the source embeddings into the target embedding space.", "To do this, they first build a seed dictionary (known as the initial solution) with different methods and then learn the optimal mapping function that fits the seed dictionary.", "Based on the mapping function, a new dictionary of higher quality is inferred from the cross-lingual word embeddings by finding nearest neighbors in the target embedding space.", "With the new dictionary, the mapping function is further refined to fit it.", "The inference of the dictionary and the refinement of the mapping function are done iteratively until the final convergence.", "During the whole procedure, the initialization stage is important and heavily studied in previous work.", "Previous methods for finding the initial solution fall into three categories.", "The first uses heuristic rules such as treating identical words as the seed (Artetxe et al., 2017), but this kind of method is restricted to languages sharing the alphabet.", "The second category is adversarial methods (Zhang et al., 2017; Conneau et al., 2017; Xu et al., 2018; Alvarez-Melis and Jaakkola, 2018), but these suffer from the drawbacks of generative adversarial models, i.e., sensitivity to hyper-parameters, long training times, etc.", "The third category is structure-based methods (Artetxe et al., 2018b; Hoshen and Wolf, 2018), which are more flexible and robust than the other categories and achieve the state-of-the-art BLI performance.", "In Artetxe et al. 
(2018b), they first compute a similarity matrix of all words in the vocabulary, and then represent each word with the distribution of the similarity values, while in Hoshen and Wolf (2018), they project the word vectors to the top 50 principal components of the embedding spaces.", "After that, both of them directly use the word representations of two languages to retrieve the initial bilingual lexicons by computing the cosine distances of source and target word representations.", "However, directly finding word alignments from scratch has some demerits.", "(1) The information that a single word can provide is limited, and words are treated independently of each other.", "(2) According to our observation, there is some noise in the pre-trained embeddings even for high-frequency words, so that the initial word alignments derived from them are not accurate.", "Those mistakes in the initial word-level alignments can hurt the performance in the following iteration steps.", "To solve those issues, we propose a novel graph-based coarse-to-fine paradigm to generate initial solutions for learning cross-lingual word embeddings, from which we induce bilingual lexicons.", "Specifically, given source and target languages, our method first uses pre-trained monolingual embeddings to construct a graph for each language, with the vertices representing different words, so that the mutual relationship between words is preserved.", "Next, we use the Bron-Kerbosch algorithm (Akkoyunlu, 1973) to extract cliques (subsets of vertices in which every two distinct vertices are adjacent) in the source and target graphs.", "After that, we calculate the clique embeddings and map the cliques of the two graphs.", "We then treat the central words of the aligned cliques as the seeds to learn the mapping of the two word embedding spaces.", "Our contributions are threefold.", "(1) By building word graphs, we leverage the clique-level information extracted from them.", "The cliques cluster similar words and assemble their mutual 
relationship, providing richer and more accurate information.", "(2) We propose the coarse (clique extraction) to fine (seed induction) procedure for the BLI task, which effectively reduces the bad effect of the noise in the pre-trained embeddings; (3) We improve the BLI performance on the MUSE dataset with our method, even compared with strong baselines.", "Unsupervised bilingual lexicon induction (BLI) is the task of inducing word translations from monolingual corpora of two languages.", "Recently proposed methods follow the same procedure, i.e., first learning cross-lingual embeddings in an unsupervised way (Section 2.1) and then inducing bilingual lexicons from the embedding spaces (Section 2.2).", "Previous methods for learning cross-lingual embeddings can be roughly divided into two categories (Ormazabal et al., 2019), i.e., mapping methods and joint learning methods.", "As methods in the second category, e.g., the bilingual skip-gram (Luong et al., 2015), require a bilingual corpus during training, current methods for unsupervised cross-lingual embeddings mainly fall into the first category.", "Given pre-trained monolingual embeddings of two languages, the mapping methods try to map the source and target embedding spaces through a linear transformation (Mikolov et al., 2013) $W \in M_{d \times d}(\mathbb{R})$, where $M_{d \times d}(\mathbb{R})$ is the space of $d \times d$ matrices of real numbers and $d$ is the dimension of the embeddings.", "Based on that, Xing et al. (2015) propose to constrain $W$ to be orthogonal, i.e., $W^\top W = I$, and Conneau et al. 
(2017) find this is a Procrustes problem which advantageously offers a closed-form solution obtained from the singular value decomposition (SVD) of $YX^\top$ as follows: $W^\star = \arg\min_{W} \| WX - Y \|_F = UV^\top$, with $U \Sigma V^\top = \mathrm{SVD}(YX^\top)$ (1), where $X$ and $Y \in M_{d \times n}(\mathbb{R})$ consist of the embeddings of the bilingual lexicons $\{x_i, y_i\}_{i=1}^{n}$ in the seed dictionary.", "Therefore, there are two steps to learn unsupervised cross-lingual embeddings.", "The first step is to find an initial solution (also known as the seed dictionary), and the second one is to obtain the desired $W$ according to Eq. (1).", "The above two steps can be done iteratively, by inducing a new seed dictionary from the learned cross-lingual embeddings with the method introduced next, and using the new dictionary to refine the matrix $W$ (known as the refinement process in some literature).", "The first step, i.e., finding the initial solution, is crucial because it decides the direction of the following iteration.", "Much previous work is devoted to finding good initial solutions with different methods, as is described in Section 1.", "
But their methods only exploit word-level information, which is limited and may be inaccurate due to the noise in pre-trained monolingual embeddings, leading to mistakes in the initial word-level alignments.", "Therefore, we propose a novel graph-based coarse-to-fine paradigm to find the initial solution of higher quality, leveraging clique-level information which we think is richer and more accurate.", "Based on the learned cross-lingual embeddings, bilingual lexicons can be induced from the mapped spaces via the nearest neighbor (NN) method by calculating the cosine distance of the mapped source embeddings and the target embeddings.", "However, this method suffers from the hubness problem (Dinu et al., 2014) such that some target words appear as the nearest neighbors of many source words.", "To mitigate this problem, alternatives to the distance function have been proposed, such as inverted softmax (Smith et al., 2017), CSLS (Conneau et al., 2017) and margin-based scores (Artetxe and Schwenk, 2018).", "Among them, CSLS, as a special case of margin-based scores, is widely used in the SOTA embedding-based BLI methods.", "Formally, CSLS calculates the distance between the mapped and the target embeddings as follows: $\mathrm{CSLS}(Wx, y) = 2\cos(Wx, y) - r_T(Wx) - r_S(y)$ (2), where $r_T(Wx) = \frac{1}{K} \sum_{y' \in N_T(Wx)} \cos(Wx, y')$ (3) is the mean similarity of a mapped source embedding $Wx$ to its $K$ target neighbors $N_T(Wx)$.", "Similarly, $r_S(y)$ is the mean similarity of a target embedding $y$ to its neighbors.", "As is mentioned before, recent work on bilingual lexicon induction (BLI) is mostly based on unsupervised cross-lingual embeddings, whose key point is to find initial solutions to learn the mapping function.", "However, previous methods find initial solutions just based on word-level information, which may be limited and inaccurate due to the noise in pre-trained monolingual embeddings.", "Therefore, we exploit the information provided by word cliques and 
figure out a coarse-to-fine procedure to denoise and find the initial solution of higher quality.", "Based on that, we learn the cross-lingual embeddings and induce word translations.", "As shown in Figure 1, our method for BLI can be roughly divided into several steps.", "Given the source and target languages, we first build a graph for each language.", "Each vertex of the graph represents a word.", "Next, we extract word cliques from the graphs and map the cliques of the two languages in an unsupervised way.", "Then, we induce the seed dictionary from the bilingual cliques by choosing the respective central words of the aligned cliques.", "After that, we learn cross-lingual embeddings with the help of the induced seed dictionary.", "The above steps can be done iteratively until the final convergence.", "By building word graphs, we can use the clique-level information, which is richer and more accurate than what a single word provides.", "Besides, the whole coarse-to-fine procedure also reduces the bad effect of the noise in the pre-trained embeddings, because the clique-level alignment (coarse) is more accurate at the beginning and the word alignments inferred from it (fine) are more reasonable.", "We will next introduce each step.", "Given the pre-trained monolingual embeddings, we can derive an edge-weighted graph from them by regarding words as the vertices and their similarities as edges.", "Formally, the graph is $G = \langle V, E \rangle$ (4), where $V$ is the vertex set (vocabulary of each language) and $E$ is the edge set.", "The edges are built with monolingual embedding similarities.", "For example, for language $x$, to define the edges, we first get the word-to-word similarity matrix $M$ with $M_{i,j} = \mathrm{CSLS}(x_i, x_j)$ if $i \neq j$, and $M_{i,j} = 0$ if $i = j$ (5), where $x_i$ and $x_j$ are the normalized embeddings of the two words, respectively.", "We set the main diagonal elements to zero to avoid self-loops.", "Theoretically, there is one edge between any two arbitrary words with the edge weight to 
be $M_{i,j}$, but if the weight of an edge is too small, it will provide little information and introduce a lot of noise.", "Therefore, we prune these non-informative edges with $M_{i,j}$ less than a threshold.", "Meanwhile, the pruning greatly reduces the computation time of the next step.", "We build two graphs $G_x$ and $G_y$ for the two languages $x$ and $y$ in this way, respectively.", "Different from previous methods, we infer the initial solution not using word-level information but from word cliques, which we think is richer and more accurate.", "Figure 1: Overview of our method.", "Following Wang et al. (2016), the clique here means a maximal complete subgraph where every two distinct vertices in the clique are adjacent.", "Extracting cliques from a given graph is a nontrivial problem and is shown to be NP-complete (Karp, 1972).", "In this paper, we adopt the Bron-Kerbosch (BK) algorithm (Akkoyunlu, 1973) with pivoting (Johnston, 1976) to extract the cliques from a given graph.", "Having extracted the word cliques of the two languages, we calculate clique embeddings by averaging the embedding vectors of all words in each clique.", "We choose the word whose embedding is closest to its clique embedding as the central word of each clique.", "After that, we follow Artetxe et al. (2018b) to map the cliques of the two languages in a fully unsupervised way, i.e. 
to learn cross-lingual clique embeddings.", "We use clique extraction rather than clustering methods because (1) a word may fall into different categories because of polysemy, which can be well modeled by the cliques, and (2) the BK algorithm is much more efficient than clustering.", "Section 3.2 maps the clique embeddings of the two languages into the same space so that we can retrieve aligned cliques.", "For each source clique, we choose the nearest target clique according to the CSLS similarity score calculated by Eq. (2).", "Remember that we have chosen the central word for each clique after the clique extraction in Section 3.2, so the seed dictionary inference process is simply picking the central words of the aligned cliques, just as shown in Figure 1.", "Note that we remove duplicate seed word pairs in this process.", "Based on the initial solution (known as the seed dictionary), we then learn cross-lingual word embeddings following the Procrustes and refinement process introduced in Section 2.1.", "After obtaining the learned cross-lingual word embeddings, we rebuild the word graphs with them and iterate the whole process again until the final convergence, as shown in Figure 1.", "
Previous methods use a single matrix $W$ as the transformation function between the embedding spaces of two languages, based on the assumption that the embedding spaces of different languages are isomorphic (Mikolov et al., 2013).", "However, this is doubtful because the isomorphic assumption may not hold all the time (Søgaard et al., 2018).", "Fortunately, the cliques we extracted naturally provide good local features for us, because they are usually very different from each other in meaning, which enables us to investigate alternatives to a single mapping matrix $W$.", "Therefore, after the final iteration, we divide all the cliques into $K$ groups via clustering, i.e., $\{L_i\}_{i=1}^{K}$, and train an individual matrix $W_i$ for each of them.", "We denote this process as group mapping.", "Each $W_i$ is initialized with the learned $W$ and fine-tuned as $W_i = \arg\min_{W_i} \| W_i X_i - Y_i \|_F$, s.t. $W_i^\top W_i = I$ (6), where $X_i$ and $Y_i$ are the embedding matrices of words belonging to $L_i$.", "We assign each word to the group closest to its word embedding.", "The whole training procedure is shown in Algorithm 1.", "
3.5 Inference: After the training, we can obtain the renewed word graphs of both languages as well as their cliques, and get a set of group mapping matrices $\{W_i\}_{i=1}^{K}$.", "During inference, for each source word $x$, we first find its closest clique $C_s$ by calculating the similarities of $x$'s embedding to all clique embeddings.", "Next, we retrieve the group $L_s$ that $C_s$ belongs to, and choose the corresponding $W_s$.", "Algorithm 1: Training procedure of the proposed graph-based coarse-to-fine method.", "Bilingual lexicon induction (BLI) measures the word translation accuracy in comparison to a gold standard.", "We report results on the widely used MUSE dataset (Conneau et al., 2017).", "This dataset consists of monolingual fastText (Bojanowski et al., 2017) embeddings of many languages and dictionaries for many language pairs divided into training and test sets.", "The evaluation follows the setups of Conneau et al. (2017).", "We choose the top 10,000 word embeddings to build the word graphs because the monolingual embeddings of low-frequency words may be trained insufficiently.", "The embeddings are normalized following Artetxe et al. 
(2018b).", "Specifically, we first apply length normalization to the embeddings, and then mean-center each dimension.", "After that, we do length normalization again to ensure the word embeddings have unit length.", "An efficient algorithm for clique extraction is the Bron-Kerbosch (BK) algorithm, which is a recursive backtracking algorithm that searches for all maximal cliques in a given graph $G$.", "The pruning operation described in Section 3.1 makes the word graph a sparse graph, for which the BK algorithm can be made to run in time $O(d \cdot n \cdot 3^{d/3})$ (Eppstein and Strash, 2011), where $n$ is the number of vertices in $G$, and $d$ is the degeneracy of the graph.", "We choose a public, efficient C implementation of the BK algorithm, and only extract the cliques that contain at least three words.", "According to our observation, the cliques can be extracted within several seconds with this code.", "In our experiment, the clique embeddings of the two languages are mapped with the method proposed by Artetxe et al. (2018b).", "We use their public code to finish this step.", "We initialize $W$ with a random orthogonal matrix.", "After building the seed dictionary, we first solve the Procrustes problem (Eq. (1)), followed by the refinement process.", "We choose several supervised and unsupervised methods to be our baselines.", "The supervised baselines include: (1) the iterative Procrustes method proposed by Smith et al. (2017); (2) the multi-step framework proposed by Artetxe et al. (2018a); (3) a geometric method proposed by Jawanpuria et al. (2019).", "The unsupervised baselines include (1) MUSE proposed by Conneau et al. (2017), which is a GAN-based method followed by a refinement process; (2) a Wasserstein GAN based method combined with distribution matching and back translation, proposed by Xu et al. 
(2018); (3) a method proposed by Alvarez-Melis and Jaakkola (2018) that views the mapping problem as optimal transportation and optimizes the Gromov-Wasserstein distance between embedding spaces; (4) a robust self-learning method proposed by Artetxe et al. (2018b), which leverages the intra-linguistic word similarity information to infer initial solutions, followed by a self-learning iteration; (5) a non-adversarial method proposed by Hoshen and Wolf (2018).", "(In graph theory, a k-degenerate graph is an undirected graph in which every subgraph has a vertex of degree at most k.)", "(BK implementation: https://github.com/aaronmcdaid/MaximalCliques)", "Method en-fr en-de en-es en-it en-ru en-zh
Supervised
(Smith et al., 2017) 81.1 82.4 73.5 72.4 81.4 82.9 43.1 38.0 51.7 63.7 42.7 36.7
(Artetxe et al., 2018a) 80.5 83.1 73.5 73.5 80.5 83.8 61.3 39.6 50.5 67.3 32.3 43.4
(Joulin et al., 2018) 83.3 84.1 79.1 76.3 84.1 86.3 - 57.9 67.2 45.9 46.4
(Jawanpuria et al., 2019) 82.1 84.2 74.9 76.7 81.9 85.5 - 52.8 67.6 49.1 45.3
Unsupervised
(Conneau et al., 2017) 82.3 81.1 74.0 72.2 81.7 83.3 77.4 76.1 44.0 59.1 32.5 31.4
(Xu et al., 2018) 77.9 75.5 69.3 67.0 79.5 77.8 72.6 73.4 - -
(Alvarez-Melis and Jaakkola, 2018) 81.3 78.9 71.9 72.8 81.7 80.4 78.9 75.2 45.1 43.7 -
(Artetxe et al., 2018b) 82.3 83.6 75.1 74.3 82.3 84.7 78.8 79.5 49.2 65.6 -
(Hoshen and Wolf, 2018) 82.3 84.1 74.7 73.0 82.1 84.1 77.9 77.5 47.5 61.8 -
Ours (without GM) 82.7 83.4 75.5 75.7 82.6 84.8 78.6 79.5 48.9 63.9 38.1 35.2
Ours (with GM) 82.9 83.9 75.3 76.1 82.9 85.3 79.1 79.9 49.7 64.7 38.9 35.9
Table 1: Precision@1 for the MUSE BLI task.", "We report the result of the BLI task on the MUSE dataset (Conneau et al., 2017).", "The language pairs we choose are French (fr), German (de), Spanish (es), Italian (it), Russian (ru), and Chinese (zh) from and to English (en), as shown in Table 1.", "
From Table 1, we find that our proposed method significantly outperforms previous methods in nearly all directions, especially on the en-de and en-zh pairs, with improvements of 2 to 6 points compared with previous state-of-the-art unsupervised approaches.", "The results on some language pairs such as en-fr, en-de and en-es are remarkably competitive with strong supervised methods.", "We also see that for distant languages, i.e., en-ru and en-zh, our method achieves good results, where some unsupervised baselines fail to converge.", "However, the results still lag far behind the supervised methods, indicating that the seed dictionaries built with our method may not be perfect for these distant languages.", "This may be rooted in the diverse original training data of the monolingual embeddings on those pairs.", "Even so, our method still significantly outperforms MUSE (Conneau et al., 2017) on the en-ru and en-zh pairs.", "We also list results of some morphologically rich languages, i.e., Finnish (fi), Polish (pl) and Turkish (tr), in Table 2, which are selected by Søgaard et al. (2018).", "They find that these languages are different in morphological traits from commonly benchmarked languages, which are morphologically poor, isolating, or exclusively concatenating languages.", "For these languages, Søgaard et al. (2018) leverage identical tokens in both languages as the seeds (Artetxe et al., 2017), followed by the Procrustes solution plus the refinement process, which generates relatively good results.", "We compare our results with the supervised method, i.e., using a 5k dictionary to start, followed by Procrustes + refinement, MUSE (Conneau et al., 2017) and Søgaard et al. 
(2018) on these languages.", "From the table, we see that the GAN-based method (MUSE) fails to give good results in some directions, maybe due to its unstable training.", "Using identical tokens as the seed gives good results (Søgaard et al., 2018) and is competitive with the supervised method.", "Our method performs well on these morphologically rich languages, and even outperforms the supervised method.", "We also conduct experiments on other morphologically rich languages such as Estonian, Greek, and Hungarian, but the method fails to converge.", "From Table 1 and Table 2, we also find that leveraging the group mapping (GM, Section 3.4) contributes to bilingual lexicon induction, especially for some distant languages such as en-ru and en-zh, and morphologically rich languages, with improvements of 0.7 to 1.2 points.", "This result indicates that the assumption that the embedding spaces of different languages are isomorphic may only hold locally.", "With the help of the cliques we extracted, we can find those locality features via clustering.", "Notice that our method depends on three major hyper-parameters: (1) the number of words N we use to build word graphs; (2) the threshold used to prune the edges in the graphs; (3) the number of iterations I we do.", "In this subsection, we discuss the impact of these hyper-parameters on the BLI results, taking en2fr as an example.", "We depict the precision@1 under different hyper-parameter settings in Figure 2.", "
Figure 2: Influence of the hyper-parameters.", "From the figure, we find that the performance of our method is sensitive to the choices of N and the pruning threshold.", "If N is too small, the cliques extracted cannot reach agreement semantically across different languages because of the sparsity of semantic units.", "If N is too large, the improperly trained low-frequency word vectors will impair the performance too.", "As for the threshold, if it is too small, then much noise will be introduced into the word graphs, not only reducing the quality of the extracted cliques but also increasing the execution time of the BK algorithm.", "For I, we find that the performance improves fast when I is increased from 0 to 2, but reaches convergence at 5.", "Too many iterations hurt the performance because, at this point, the seed dictionary inferred from the mapped cliques is redundant.", "It has been shown that BLI can benefit unsupervised machine translation (MT) (Lample et al., 2018; Marie and Fujita, 2018; Ren et al., 2019) by building a Statistical Machine Translation (SMT) system with the induced bilingual lexicons and language models as SMT features, followed by an iterative back-translation process.", "In this part, we discuss the influence of different bilingual lexicon induction methods (Conneau et al., 2017; Artetxe et al., 2018b) on the performance of the initial SMT model, and report the BLEU scores on newstest2014 en-fr and en-de tasks in Table 3.", "Note that we do not do the subsequent iterative back-translation process.", "From the table, we see that the performance of unsupervised SMT is limited by the quality of the BLI results.", "As our method provides better word translations, the initial SMT models benefit from ours accordingly.", "In this part, we give some examples of the English cliques extracted with our method, as listed in Table 5.", "From the table, we see that our method can extract reasonable cliques containing words that share similar meanings.", "Each clique can be regarded 
as a semantic unit, which is more explicit than the PCA-based initialization method (Hoshen and Wolf, 2018), where they represent the semantic units with a fixed number of principal components.", "An interesting phenomenon is that May is not in the fifth clique, which groups the names of the months.", "This is because, in this dataset, all the words are lower-cased, so that may is also a modal verb.", "Besides, we observe the extracted cliques of other languages and find that they are also reasonable; they are not listed here due to space limitations.", "To demonstrate that our method can produce good initial solutions for learning cross-lingual embeddings, in this part, we give an example of the seed dictionary inferred during the first iteration with our method, compared with that inferred by MUSE (Conneau et al., 2017) and VecMap (Artetxe et al., 2018b).", "The language pairs we choose are en-fr and en-zh, as listed in Table 4.", "From the table, we find that our method produces initial solutions of higher quality.", "This is because our coarse-to-fine process can effectively filter out the noise from the start.", "Notice that the initial solution produced by MUSE in the first iteration is not good, which may be because the GAN-based method is not stable enough at the beginning of the training.", "Bilingual lexicon induction (BLI) is an important task of machine translation.", "Recent methods for bilingual lexicon induction are mostly based on unsupervised cross-lingual word embeddings (Zhang et al., 2017; Artetxe et al., 2017; Conneau et al., 2017; Artetxe et al., 2018b; Xu et al., 2018; Hoshen and Wolf, 2018; Alvarez-Melis and Jaakkola, 2018).", "They follow the same procedure, that of first building initial solutions (a seed dictionary) and then learning a mapping function between the two word embedding spaces.", "During inference, for a given source word, they find the target word via nearest neighbor search by calculating the distance of the mapped source 
embedding and all target word embeddings.", "The main focus of the previous methods is how to find the initial solution, which is the most important part.", "Their methods can be divided into three categories according to the way of finding the initial solution.", "The first category uses heuristic rules such as treating identical words as the seed (Artetxe et al., 2017), but this kind of method is restricted to languages sharing the vocabulary or at least the notation of numbers.", "The second category is adversarial methods (Zhang et al., 2017; Conneau et al., 2017; Xu et al., 2018; Alvarez-Melis and Jaakkola, 2018).", "They train a generator to perform the mapping between the two word embedding spaces, and a discriminator to distinguish the mapped embeddings from the target embeddings.", "However, they suffer from the drawbacks of generative adversarial models, i.e., sensitivity to hyper-parameters, long training times and lack of interpretability (Hoshen and Wolf, 2018).", "The third category is structure-based methods, which achieve the state-of-the-art performance on BLI.", "They either leverage the intra-linguistic word similarity information (Artetxe et al., 2018b) or principal components of monolingual word embeddings (Hoshen and Wolf, 2018), but their methods infer initial solutions just based on word-level information, which is limited and prone to contain some noise due to the insufficient training of pre-trained embeddings.", "Different from their methods, ours leverages clique-level information, which is richer and more accurate, and uses a coarse-to-fine procedure to reduce the adverse effect of the noise mentioned above.", "In this paper, we propose a novel graph-based coarse-to-fine paradigm for unsupervised bilingual lexicon induction.", "Our method uses clique-level information and reduces the bad effect of noise in the pre-trained embeddings.", "The experiments show that our method can significantly improve the bilingual word induction 
performance after several iterations compared with strong baselines, even for distant language pairs.", "In the future, we will consider combining our method with Graph Neural Networks to update the word graphs we build.", "This work is supported in part by National Key R&D Program of China AAA0102301, and NSFC 61925203 & U1636210 & 61421003.", "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146.", "Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, and Herve Jegou. 2017. Word translation without parallel data. arXiv preprint arXiv:1710.04087.", "Georgiana Dinu, Angeliki Lazaridou, and Marco Baroni. 2014. Improving zero-shot learning by mitigating the hubness problem. arXiv preprint arXiv:1412.6568.", "David Eppstein and Darren Strash. 2011. Listing all maximal cliques in large sparse real-world graphs. In International Symposium on Experimental Algorithms, pages 364-375. Springer.", "Yedid Hoshen and Lior Wolf. 2018. Non-adversarial unsupervised word translation. In Proceedings of the 2018 Conference on EMNLP, pages 469-478.", "Transactions of the Association for Computational Linguistics, 7:107-120.", "HC Johnston. 1976. Cliques of a graph - variations on the Bron-Kerbosch algorithm. International Journal of Computer & Information Sciences, 5(3):209-238.", "Richard M Karp. 1972. Reducibility among combinatorial problems. In Complexity of Computer Computations, pages 85-103. Springer.", "Benjamin Marie and Atsushi Fujita. 2018. Unsupervised neural machine translation initialized by unsupervised statistical machine translation. arXiv preprint arXiv:1810.12703.", "Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013. Exploiting similarities among languages for machine translation. arXiv preprint arXiv:1309.4168." 
]
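The closed-form Procrustes step quoted in the sentences above (Eq. (1): $W^\star = UV^\top$ with $U\Sigma V^\top = \mathrm{SVD}(YX^\top)$) can be sketched in a few lines of NumPy. This is a minimal illustrative sketch, not the authors' released code; the `procrustes` helper name and the rotation-recovery check are our own constructions.

```python
import numpy as np

def procrustes(X, Y):
    """Closed-form solution of min_W ||WX - Y||_F s.t. W orthogonal.

    X, Y: d x n matrices whose columns are the embeddings of the
    seed-dictionary pairs {x_i, y_i}. Returns W = U V^T, where
    U S V^T is the SVD of Y X^T, as in Eq. (1).
    """
    U, _, Vt = np.linalg.svd(Y @ X.T)
    return U @ Vt

# Toy check: if Y is an exact orthogonal transform of X,
# the Procrustes solution recovers that transform.
rng = np.random.default_rng(0)
d, n = 4, 50
X = rng.normal(size=(d, n))
R, _ = np.linalg.qr(rng.normal(size=(d, d)))  # a random orthogonal matrix
Y = R @ X
W = procrustes(X, Y)
```

In the full method this solve alternates with dictionary re-induction (the refinement process): each new seed dictionary gives new `X`, `Y` pairs and hence a refined `W`.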
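The CSLS retrieval score in Eqs. (2)-(3) above can likewise be sketched with NumPy. The `csls_scores` helper name and the tiny random embeddings are assumptions for illustration only; embeddings are taken to be length-normalized so that dot products equal cosines, as in the normalization step the sentences describe.

```python
import numpy as np

def csls_scores(WX, Y, k=10):
    """CSLS(Wx, y) = 2 cos(Wx, y) - r_T(Wx) - r_S(y)   (Eq. (2)).

    WX: mapped source embeddings (m x d), Y: target embeddings (n x d),
    both length-normalized. r_T / r_S are the mean cosine similarities
    to the k nearest neighbors in the other space (Eq. (3)).
    """
    cos = WX @ Y.T                                    # m x n cosine matrix
    r_T = np.sort(cos, axis=1)[:, -k:].mean(axis=1)   # hub penalty per source word
    r_S = np.sort(cos, axis=0)[-k:, :].mean(axis=0)   # hub penalty per target word
    return 2 * cos - r_T[:, None] - r_S[None, :]

rng = np.random.default_rng(1)
WX = rng.normal(size=(3, 5)); WX /= np.linalg.norm(WX, axis=1, keepdims=True)
Y = rng.normal(size=(4, 5));  Y /= np.linalg.norm(Y, axis=1, keepdims=True)
S = csls_scores(WX, Y, k=1)
```

Subtracting the neighborhood means penalizes "hub" target words that are close to many source words, which is exactly the hubness mitigation the text attributes to CSLS.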
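The Bron-Kerbosch clique extraction with pivoting (Johnston, 1976) described above can be sketched as a short recursive enumerator. The adjacency-set graph encoding and the toy four-word graph are our own illustrative choices; this is not the C implementation the paper links to, and it omits the paper's minimum-clique-size filter.

```python
def bron_kerbosch(graph):
    """Enumerate all maximal cliques of an undirected graph given as
    {vertex: set(neighbors)}, using Bron-Kerbosch with pivoting."""
    cliques = []

    def expand(R, P, X):
        # R: current clique, P: candidates, X: already-processed vertices.
        if not P and not X:
            cliques.append(R)  # R is maximal
            return
        # Pivot on the vertex covering most candidates to prune branches.
        pivot = max(P | X, key=lambda v: len(graph[v] & P))
        for v in list(P - graph[pivot]):
            expand(R | {v}, P & graph[v], X & graph[v])
            P.remove(v)
            X.add(v)

    expand(set(), set(graph), set())
    return cliques

# Toy word graph: a triangle {a, b, c} plus a pendant edge c-d.
graph = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
maximal = {frozenset(c) for c in bron_kerbosch(graph)}
```

On the pruned (sparse) word graphs described above, this enumeration is fast in practice, consistent with the degeneracy-based bound the text cites from Eppstein and Strash (2011).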
[ "abstain", "abstain", "abstain", "objective", "objective", "method", "abstain", "abstain", "method", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "objective", "method", "abstain", "method", "objective", "method", "abstain", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "abstain", "result", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "result", "abstain", "abstain", "result", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", 
"objective", "abstain", "method", "result", "method", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other" ]
[ "Opinion target extraction and opinion words extraction are two fundamental subtasks in Aspect Based Sentiment Analysis (ABSA).", "Recently, many methods have made progress on these two tasks.", "However, few works aim at extracting opinion targets and opinion words as pairs.", "In this paper, we propose a novel sequence labeling subtask for ABSA named TOWE (Target-oriented Opinion Words Extraction), which aims at extracting the corresponding opinion words for a given opinion target.", "A target-fused sequence labeling neural network model is designed to perform this task.", "The opinion target information is well encoded into context by an Inward-Outward LSTM.", "Then left and right contexts of the opinion target and the global context are combined to find the corresponding opinion words.", "We build four datasets for TOWE based on several popular ABSA benchmarks from laptop and restaurant reviews.", "The experimental results show that our proposed model outperforms the other compared methods significantly.", "We believe that our work may not only be helpful for downstream sentiment analysis tasks, but can also be used for pair-wise opinion summarization.", "Sentiment analysis, also known as opinion mining (Pang and Lee, 2007; Liu, 2012), has drawn increasing attention of researchers and industries in recent years.", "It can provide valuable information from user-generated reviews.", "However, sentiment analysis at sentence level or document level sometimes cannot provide more detailed information, thus a finer-grained task, Aspect-Based Sentiment Analysis (ABSA) (Pontiki et al., 2014), is proposed to identify the opinions of a specific target or aspect", "in reviews.", "ABSA consists of multiple subtasks including aspect category detection, opinion target extraction, aspect level sentiment classification, etc.", "Opinion target extraction (OTE) and opinion words extraction (OWE) are two such fundamental subtasks.", "Opinion targets, 
sometimes called aspect terms, are the words or phrases in the sentence representing features or entities towards which users show attitude.", "Opinion words (or opinion terms) refer to those terms used to express attitude explicitly.", "For example, in the sentence 'The menu is limited but almost all of the dishes are excellent.', the words menu and dishes are two opinion targets, and the words limited and excellent are opinion words.", "More examples can be found in the upper part of Figure 1.", "Recently, a great number of works based on neural networks have been done on these two subtasks (Liu et al., 2015; Poria et al., 2016; Xu et al., 2018).", "Furthermore, some works also integrate the two subtasks into a multi-task learning architecture to extract them jointly, which achieves great progress on both subtasks (Wang et al., 2016, 2017; Li and Lam, 2017).", "However, the extracted opinion targets and opinion words are not in pairs and the correspondence is not extracted.", "For instance, in the example sentence, ⟨menu:limited⟩ and ⟨dishes:excellent⟩ are two opinion pairs.", "Obviously, extracting them as pairs is significant for ABSA.", "Additionally, in Figure 1, the list of pairs extracted from the example review can be considered to be an extractive pair-wise opinion summarization.", "Considering the significance of the pairs in reviews and the promising results of target extraction in previous works, in this paper, we propose a new subtask for ABSA named TOWE (Target-oriented Opinion Words Extraction).", "Given a review and a target in the review, the objective of TOWE is to extract the corresponding opinion words describing or evaluating the target from the review.", "Then, TOWE can form pairs of the given target and its corresponding opinion words.", "Motivated by the success of neural networks in natural language processing, we design a powerful sequence labeling neural network model to perform TOWE.", "The task TOWE aims 
to extract the target-oriented opinion terms.", "In the same review, for different targets, the model needs to output different results.", "Therefore, a core challenge is the learning of target-specific context representations.", "We design a neural encoder to incorporate target information and generate the target-fused context.", "To be specific, we propose an Inward-Outward LSTM to pass target information to the left context and the right context of the target respectively.", "Then we combine the left, right and global context to encode the sentence and perform sequence labeling.", "It is essential and reasonable to formulate TOWE as a sequence labeling task because some opinion terms include several words and one opinion target may correspond to multiple opinion terms.", "We try two different decoding strategies in the experiment.", "We propose a sequence labeling subtask for ABSA: TOWE (Target-oriented Opinion Words Extraction), which can offer assistance and interpretability for downstream tasks in ABSA.", "We design a novel sequence labeling neural network model to perform TOWE.", "It can generate target-specific context representations for different targets in the same review.", "We build four datasets from different domains serving as a benchmark for future works.", "We conduct extensive experiments on these datasets, and the results show that our model significantly exceeds a variety of baselines.", "A lot of works have been carried out for Opinion Target Extraction.", "Traditional methods can be categorized into unsupervised/semi-supervised methods (Hu and Liu, 2004; Zhuang et al., 2006; Qiu et al., 2011) and supervised methods (Jakob and Gurevych, 2010; Shu et al., 2017).", "Recently, deep learning methods have also made progress in this task.", "Liu et al. (2015) apply a recurrent neural network with pre-trained word embeddings to solve this task.", "Yin et 
al. (2016) exploit a CRF with dependency-path enhanced word embeddings for aspect term extraction.", "Poria et al. (2016) use a deep convolutional neural network (CNN) and Xu et al. (2018) propose a CNN model with double embeddings.", "Some works extract the targets and opinion words jointly as a co-extraction strategy.", "Qiu et al. (2011) propose double propagation to expand opinion target and opinion word lists in a bootstrapping way.", "Liu et", "al. (2013) extract the targets and opinion words jointly, modeling their relation with a statistical word alignment model.", "This co-extraction strategy can also be adopted in neural networks with multi-task learning (Wang et al., 2016, 2017; Li and Lam, 2017).", "However, in all these works, the extracted targets and opinion words are separated.", "In the literature, only a few works discussed opinion pairs.", "Hu and Liu (2004) use the distance information and recognize the nearest adjective of the target as the opinion word.", "Zhuang et al. (2006) utilize lexicons and human-built word lists to extract the targets and opinion words in the corpus, and then identify valid feature-opinion pairs with syntactic rule templates based on dependency parsing trees.", "However, these two methods depend heavily on external resources such as parsers or lexicons, and the performance of these approaches relies on the quality of the parsing results.", "By contrast, our model is a purely data-driven supervised learning method and does not need any external linguistic knowledge, lexicons or handcrafted templates.", "Moreover, in these two methods, the process of detecting opinion words and the process of discovering correspondence are separated into two tasks, which suffers from error propagation.", "Our model for TOWE aims at detecting the corresponding opinion words in one step with sequence labeling.", "Given a sentence s = {w_1, w_2, ..., w_i, ... 
, w_n} consisting of n words, and an opinion target t in the sentence, the task is to perform sequence labeling on the sentence to extract the target-oriented opinion words.", "We use the BIO tagging scheme (Ramshaw and Marcus, 1995) on this task.", "For each word w_i in the sentence s, it should be tagged as y_i ∈ {B, I, O} (B: Beginning, I: Inside, O: Others).", "For example, for different opinion targets, the sentence 'Waiters are very friendly and the pasta is out of this world.' is tagged in w_i/y_i style as follows:", "1. Waiters/O are/O very/O [ friendly/B ] and/O the/O pasta/O is/O out/O of/O this/O world/O", "./O (Given opinion target: waiter , extract friendly as the corresponding opinion word).", "2. Waiters/O are/O very/O friendly/O and/O the/O pasta/O is/O [ out/B of/I this/I world/I ]", "./O (Given opinion target: pasta , extract out of this world as the corresponding opinion words).", "Figure 2 shows the framework of our methods, which follows an encoder-decoder architecture.", "We propose a target-fused encoder to incorporate the target information into context and learn target-specific context representations, then pass them to the decoder for sequence labeling.", "In the target-fused encoder, we first use an Inward-Outward LSTM to model the left context and right context of the target, then combine them with the global context.", "In the decoder, we can adopt two different decoding strategies.", "We present the details of each component in the following sections.", "We first generate the input vectors for each word by using an embedding lookup table L ∈ R^{d×|V|}, where d is the embedding dimension and |V| is the vocabulary size.", "The embedding lookup table will map s = {w_1, w_2, ..., w_i, ..., w_n} to a sequence of vectors {e_1, e_2, ..., e_i, ... 
, e_n} as word representations, where e_i ∈ R^d.", "Typically, neural sequence labeling models use recurrent neural networks, such as LSTM (Hochreiter and Schmidhuber, 1997) or BiLSTM, to model the sentence.", "However, merely using BiLSTM to model the whole sentence is totally target-independent.", "For the different target terms in the same sentence, BiLSTM outputs equal representations and cannot generate target-specific results.", "As mentioned before, the core challenge of TOWE is the learning of target-specific context representations.", "It is evident that different targets have different positions in the sentence and thus different contexts.", "So, we first split the sentence into three segments: left context {w_1, w_2, ..., w_l}, target term {w_{l+1}, ..., w_{r-1}} and right context {w_r, ..., w_n}; the left and right contexts are target-specific. We use a left LSTM to model the left context plus target and a right LSTM to model the target plus right context respectively.", "In this way the target-specific contexts could generate target-specific context representations.", "However, the direction of the two LSTMs is a crucial problem.", "We can use a simple strategy called Inward-LSTM, which follows the design of TD-LSTM (Tang et al., 2016).", "As Figure 2 shows, Inward-LSTM runs the two LSTMs from the two ends of the sentence to the middle target respectively.", "It runs the left LSTM from the first word to the opinion target as a forward LSTM and the right LSTM from the last word to the opinion target as a backward LSTM, so we call it Inward.", "This is a process of passing the context to the target.", "We obtain left context representations H^L and right context representations H^R as follows: h^L_i = LSTM(h^L_{i-1}, e_i), i ∈ [1, ..., r-1], (1) h^R_i = LSTM(h^R_{i+1}, e_i), i ∈ [l+1, ..., n]. (2)", "It is obvious that the words of the opinion target {w_{l+1}, ..., w_{r-1}} are represented twice, in the left LSTM and the right LSTM.", "We simply average the two representations 
for the same word to get the representation of target words: h^{LR}_i = (h^L_i + h^R_i) / 2, i ∈ [l+1, ..., r-1]. (3)", "Then the context representation is: H^I = {h^L_1, ..., h^L_l, h^{LR}_{l+1}, ..., h^{LR}_{r-1}, h^R_r, ..., h^R_n}.", "Although passing contexts to the target in Inward-LSTM is a good strategy for encoding the whole sentence representation, only using this strategy is not enough for TOWE because the target information is not passed to the left and right context.", "For example, in the sentence 'I found the food to be outstanding.', the opinion target is food; the Inward-LSTM will first model outstanding and then model food.", "The representation of outstanding does not contain the information of food.", "To solve this problem, we design a novel strategy specifically for TOWE, i.e., Outward-LSTM.", "The idea of the Outward-LSTM is to pass the target to the context, which we believe is a better choice.", "As Figure 2 shows, the Outward-LSTM starts two LSTMs from the target in the middle and runs them towards both ends of the sentence, which means the left LSTM is a backward LSTM and the right LSTM is a forward LSTM.", "We average the duplicate target hidden states and get the target-fused context representations H^O = {h^L_1, ..., h^L_l, h^{LR}_{l+1}, ..., h^{LR}_{r-1}, h^R_r, ..., h^R_n}: h^L_i = LSTM(h^L_{i+1}, e_i), i ∈ [1, ..., r-1], (4) h^R_i = LSTM(h^R_{i-1}, e_i), i ∈ [l+1, ..., n], (5) h^{LR}_i = (h^L_i + h^R_i) / 2, i ∈ [l+1, ..., r-1]. (6)", "This concise and reasonable strategy can solve the problems remaining in the Inward-LSTM.", "As we start the LSTM from the target, the target's information is fused into each word in the sentence.", "Also, the Outward-LSTM ensures that for different targets each word has different representations.", "Take the sentence 'Its camera is wonderful but the battery life is short!' 
as an example.", "For target camera or battery life, the target-fused representations for short are different and can generate target-specific results.", "We can combine both strategies and adopt an Inward-Outward LSTM (IO-LSTM).", "IO-LSTM concatenates the outputs of Outward-LSTM and Inward-LSTM.", "The output of Outward-LSTM is crucial for incorporating target information into context, while the Inward-LSTM is included so they can complement each other and act as a Target-specific Bidirectional LSTM.", "The target-fused context representations are denoted as H^{IO}: h^{IO}_i = [h^I_i; h^O_i]. (7)", "To extract the target-oriented opinion words, only considering the context of each side in isolation is not enough.", "The left context and right context in the IO-LSTM are separated, and the left LSTM and right LSTM only share the opinion target.", "It is important to understand the global meaning of the whole sentence while detecting the opinion words in the left and right context.", "So we introduce the global context to further improve the IO-LSTM.", "We use a BiLSTM to model the whole sentence embeddings e = {e_1, e_2, ..., e_i, ..., e_n} and obtain the global contextualized representation H^G as follows: h^G_i = [→h_i; ←h_i], (8) →h_i = LSTM(→h_{i-1}, e_i), (9)", "←h_i = LSTM(←h_{i+1}, e_i). (10)", "Then we combine the left-right contexts from IO-LSTM and the global context, as shown in Figure 2.", "
This enables us to obtain the final target-specific contextualized representation r for each word: r_i = [h^{IO}_i; h^G_i].", "The final representation r is fused with both target information and global context information, and can be passed to the decoder for sequence labeling.", "Given a sequential representation r, we can use r to compute p(y|r), where y = {y_1, ..., y_n} is the BIO-label sequence for the sentence and y_i ∈ {B, I, O}.", "Two different decoding policies can be adopted.", "The first is greedy decoding, formulated as a three-class classification problem at each position independently.", "We use softmax to compute the probability: p(y_i|r_i) = softmax(W_s r_i + b_s).", "Greedy decoding simply selects the tag with the highest point-wise probability.", "It does not consider the dependencies between tags but runs faster.", "We use the negative log likelihood (NLL) as the loss for one sentence: L(s) = -Σ_{i=1}^{n} Σ_{k=1}^{3} I(y_i = k) log p(y_i = k | w_i).", "The second decoding method is to use a Conditional Random Field (CRF) (Lafferty et al., 2001).", "CRF considers the correlations between tags in neighborhoods and scores the whole sequence of tags.", "Specifically, we use a linear-chain CRF and score the tag sequence as a conditional probability: p(y|r) = exp(s(r, y)) / Σ_{y' ∈ Y} exp(s(r, y')), (14) where Y is the set of all possible tag sequences and s(r, y) = Σ_{i=1}^{n} (A_{y_{i-1}, y_i} + P_{i, y_i}) is the score function.", "A_{y_{i-1}, y_i} measures the transition score from y_{i-1} to y_i and P_i = W_s r_i + b_s.", "So we use the negative log likelihood as the loss of the sentence: L(s) = -log p(y|r).", "When given a new sentence for decoding, we output the tag sequence that maximizes the conditional probability with the Viterbi algorithm.", "Finally, we minimize the loss for training: J(θ) = Σ_{s ∈ D} L(s).", "We build the datasets based on the SemEval 
challenge 2014 Task 4, SemEval challenge 2015 Task 12 and SemEval challenge 2016 Task 5 (Pontiki et al., 2014, 2015, 2016).", "The SemEval challenge provides several datasets from the restaurant and laptop domains.", "These datasets are very popular benchmarks for many ABSA subtasks, including Aspect category detection, Opinion Target Extraction, Opinion Words Extraction and Target-Dependent Sentiment Analysis (TDSA).", "In the original datasets of the SemEval challenge, the opinion targets (aspect terms) are annotated, but the opinion words and the correspondence with targets are not provided.", "So we annotate the corresponding opinion words for the annotated targets.", "Every sentence is annotated by two people, and conflicts are checked.", "Each instance of the datasets consists of a sentence, the position of the target and the positions of the corresponding opinion words.", "Note that we only keep the sentences that contain pairs of target and opinion words.", "The sentences without targets or with implicit opinion expressions are not included.", "Finally, we generate four datasets: 14res and 14lap from SemEval 2014, 15res from SemEval 2015 and 16res from SemEval 2016.", "14res, 15res, and 16res contain reviews from the restaurant domain.", "The sentences in 14lap come from the laptop domain.", "The statistics of the four datasets are shown in Table 1.", "
4.2 Settings In our experiments, we initialize word embedding vectors with 300-dimension GloVe vectors. (Table 1: Statistics of datasets. 14res: Training 1627 sentences / 2643 targets, Testing 500 / 865; 14lap: Training 1158 / 1634, Testing 343 / 482; 15res: Training 754 / 1076, Testing 325 / 436; 16res: Training 1079 / 1512, Testing 329 / 457.)", "The word embeddings are fixed and not fine-tuned during the training stage.", "The dimension of hidden states in all the LSTM cells is set to 200.", "Adam (Kingma and Ba, 2015) is chosen as the optimization method with the default setting in the original paper.", "We randomly split 20% of the train set as a dev set for tuning the hyperparameters and early stopping.", "Then we test the models on the testing sets and the average result of five runs is reported.", "Precision, recall and F1 score are used as the metrics to measure the performance of models.", "An extracted opinion word span is regarded as a correct prediction when the starting and ending offsets of the predicted span are both identical to those of a gold opinion word span.", "We compute Precision, Recall and F1 with the span as the unit.", "Since we are the first to study this sequence labeling task, there is no available sequence labeling model in the literature to be compared.", "Although there are a number of complicated models in TDSA, the task is different.", "Those TDSA models focus on sentence-level representations for sentiment classification, while for TOWE token-level representations are more crucial.", "Simply transferring the TDSA models to TOWE is not suitable.", "Except for two rule-based methods, we can only design and implement the baselines for TOWE ourselves.", "Our final model is the IOG encoder with a greedy decoding strategy.", "We compare it with the following baselines: Distance-rule : Hu and Liu (2004) use the distance and POS tags to determine the opinion words.", "Following this idea, we first use the nltk toolkit to perform 
part-of-speech tagging on each word and select the nearest adjective from the target as the corresponding opinion word.", "Dependency-rule : We adopt the strategies proposed by Zhuang et al. (2006), which use dependency-tree based templates to identify opinion pairs.", "The POS tags of opinion targets and opinion words and the dependency path between them in the training set are recorded as rule templates.", "The high-frequency dependency templates are used for detecting the related opinion words in the testing set.", "LSTM/BiLSTM : This method is an LSTM/BiLSTM network built on top of word embeddings, proposed by Liu et al. (2015).", "We pass the whole sentence into the LSTM/BiLSTM and each hidden state is fed to a softmax layer for three-class classification, which works as sentence-level opinion words extraction.", "Pipeline : This method combines the BiLSTM and Distance-rule methods in a pipelined way.", "We first train a sentence-level opinion words extraction model with BiLSTM and extract all the opinion words in the test sentences; then we select the closest extracted opinion words of the target as the result.", "Target-Concatenated BiLSTM (TC-BiLSTM) : This method incorporates the target information into the sentence by concatenation.", "A target vector is obtained by the average pooling of target word embeddings.", "The word representation at each position is the concatenation of the word embedding and the target vector, which is then fed into a BiLSTM for sequence labeling.", "The main results can be found in Table 2.", "
Note that all the neural models in Table 2 adopt greedy decoding.", "The performance of the Distance-rule method is not satisfactory and is the worst among all the methods; its recall score is especially low.", "IOG obtains an F1 score with a greater-than 30% improvement over the Distance-rule method.", "The Dependency-rule method obtains a general improvement over Distance-rule, but is still lower than the sequence-labeling based methods below.", "(Footnote 1: We use the parsers in spaCy: https://spacy.io. Table 2: Main Results in terms of Precision, Recall and F1-score, reported as P/R/F1 on 14res, 14lap, 15res and 16res: Distance-rule 58.39/43.59/49.92, 50.13/33.86/40.42, 54.12/39.96/45.97, 61.90/44.57/51.83; Dependency-rule 64.57/52.72/58.04, 45.09/31.57/37.14, 65.49/48.88/55.98, 76.03/56.19/64.62; Pipeline 77.72/62.33/69.18, 72.58/56.97/63.83, 74.75/60.65/66.97, 81.46/67.81/74.01; LSTM 52.64/65.47/58.34, 55.71/57.53/56.52, 57.27/60.69/58.93, 62.46/68.72/65.33; BiLSTM 58.34/61.73/59.95, 64.52/61.45/62.71, 60.46/63.65/62.00, 68.68/70.51/69.57; TC-BiLSTM 67.65/67.67/67.61, 62.45/60.14/61.21, 66.06/60.16/62.94, 73.46/72.88/73.10; IOG 82.85/77.38/80.02, 73.24/69.63/71.35, 76.06/70.71/73.25, 85.25/78.51/81.69.) This reveals the", "lack of robustness in rule-based approaches.", "The error propagation from syntactic parsers is also a reason for poor performance.", "The Pipeline model performs much better than the rule-based methods, obtaining an especially high precision, showing that machine-learning methods can achieve better opinion words extraction.", "However, the Pipeline model is still not ideal, and its F1-score is approximately 10% lower than our proposed model on several datasets.", "This reflects that the distance information is not sufficient for detecting target-oriented opinion words, while IOG can better handle the long-distance dependency problem.", "Also, this strategy cannot solve the cases where one target corresponds to more than one opinion term.", "It also suffers from error propagation.", "LSTM and 
BiLSTM are both target-independent, leading to low precision, and their performance is even worse than the Pipeline method.", "IOG outperforms BiLSTM by about 15% on average, which indicates that the target information should be included.", "TC-BiLSTM includes the target information by concatenation and obtains better general performance than LSTM and BiLSTM.", "However, TC-BiLSTM is still over 10% lower than IOG and is slightly inferior to Pipeline, showing that concatenation is not a good way to incorporate the target information for TOWE.", "We believe that the problem is that the concatenated target may interfere with the other targets in the same sentence.", "datasets from different domains compared to both the rule-based methods and neural models.", "We can conclude that IOG can learn target-specific representations more effectively and can better capture the correspondence between targets and opinion words.", "To compare the different designs of our model and provide more comparison models, we also report the results of the variants of our models in Table 3.", "
Inward-LSTM : H^I computed from (1), (2), (3) is fed to the greedy decoder for sequence labeling.", "Outward-LSTM : H^O computed from (4), (5), (6) is fed to the greedy decoder. IO-LSTM : Combining Inward-LSTM and Outward-LSTM, H^{IO} is obtained by the concatenation of H^I and H^O in (7), which is then used for greedy decoding.", "IOG+CRF : Passing the representations r in IOG to a CRF decoder.", "The performance of Inward-LSTM is inferior, similar to the target-independent BiLSTM.", "This demonstrates that only passing the context to the target is similar to not considering the target information, owing to the problems we discussed before.", "The F1-score of Outward-LSTM exceeds that of the Inward-LSTM by more than 10%.", "(Table 4: example sentences with the extraction results of Distance-rule, Dependency-rule, Pipeline, BiLSTM, TC-BiLSTM and IOG, e.g., 'The bread is top notch as well.') This shows", "that passing the target into the context is a better choice and learning the target-specific word representations is crucial.", "In fact, Outward-LSTM has already outperformed all the previous baselines, which indicates that this is a really good design for TOWE.", "IO-LSTM, which combines the Inward and Outward strategies, is slightly better than Outward-LSTM, showing that Inward-LSTM can still provide supplementary information for Outward-LSTM.", "Through combining the global context with IO-LSTM as the IOG model, we roughly obtain a further 1% improvement.", "We also test our model with a linear Conditional Random Field as the decoder.", "CRF considers the label dependencies.", "It can be observed that IOG with CRF obtains a slight improvement.", "To demonstrate the effectiveness of our model, we pick some examples from the test set of 14res and", "show the extracted results of different models.", "In the first sentence, since the Distance-rule cannot extract phrases, the extraction it makes is incorrect.", "In addition, merely selecting the nearest adjective using the Distance-rule approach does not enable coverage in all cases, as shown in the second and third 
sentences (e.g., the asian and lyche ).", "Dependency-rule in some cases fails to extract any word, owing to parser errors or the lack of a matching template.", "The Pipeline method has the problem that it cannot handle cases where one target corresponds to multiple opinion terms (e.g., not great is not extracted in the fourth sentence).", "The drawback of BiLSTM is that it does not include target information, so it extracts both love and good in the third sentence while only love is the corresponding opinion word for drinks.", "Although TC-BiLSTM is a target-specific model, it tends to extract irrelevant opinion words because of the interference from concatenation.", "In the last two rows of Table 4, we show the same sentence with two different targets, and only IOG does not make mistakes for both targets.", "In this paper, we propose a novel subtask for aspect-based sentiment analysis: Target-oriented Opinion Words Extraction (TOWE), which aims at extracting the corresponding opinion words for a given opinion target.", "We design a novel neural model, IOG, to solve this task.", "IOG can effectively encode target information into the left and right contexts respectively.", "Then we combine the left and right contexts of the opinion target and the global context for extracting the corresponding opinion words in the decoder.", "We contribute four datasets based on several benchmarks.", "The experimental results demonstrate that our model achieves the best performance across all the datasets from different domains.", "In future works, TOWE could be utilized to further improve the performance on downstream sentiment analysis tasks by building a more interpretable model, such as with enhanced features or multitask learning.", "In addition, an end-to-end extractive opinion summarization method without given gold targets is also future work.", "The authors would like to thank Fang Qian and Fei Zhao for their contribution to building the datasets, and Robert Ridley for his comments on this paper.", 
"We also express our gratitude to the anonymous reviewers for their valuable feedback.", "This work is supported by the National Natural Science Foundation of China (No. 61672277, U1836221) and the Jiangsu Provincial Research Foundation for Basic Research (No. BK20170074)." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "objective", "method", "abstain", "abstain", "objective", "objective", "objective", "method", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", 
"abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "objective", "abstain", "method", "objective", "objective", "abstain", "abstain", "other", "other", "other" ]
[ "Abstract", "The code and data are available at https: //github.com/wenhuchen/LogicNLG .", "Neural natural language generation (NLG) models have recently shown remarkable progress in fluency and coherence.", "However, existing studies on neural NLG are primarily focused on surface-level realizations with limited emphasis on logical inference, an important aspect of human thinking and language.", "In this paper, we suggest a new NLG task where a model is tasked with generating natural language statements that can be logically entailed by the facts in an open-domain semi-structured table.", "To facilitate the study of the proposed logical NLG problem, we use the existing TabFact dataset (Chen et al., 2019) featured with a wide range of logical/symbolic inferences as our testbed, and propose new automatic metrics to evaluate the fidelity of generation models w.r.t. logical inference.", "The new task poses challenges to the existing monotonic generation frameworks due to the mismatch between sequence order and logical order.", "In our experiments, we comprehensively survey different generation architectures (LSTM, Transformer, Pre-Trained LM) trained with different algorithms (RL, Adversarial Training, Coarse-to-Fine) on the dataset and made following observations: 1) Pre-Trained LM can significantly boost both the fluency and logical fidelity metrics, 2) RL and Adversarial Training are trading fluency for fidelity, 3) Coarse-to-Fine generation can help partially alleviate the fidelity issue while maintaining high language fluency.", "Neural network models, especially the recent wave of massive models like BERT (Devlin et al., 2019) and GPT-2 (Radford et al., 2019), have shown the ability to generate natural language text at an astonishing level of fluency and coherence.", "For the generated text to fulfill its purpose, however, a crit-Nation Gold Medal Silver Medal Bronze Medal Sports Canada 3 1 2 Ice Hockey Mexico 2 3 1 Baseball Colombia 1 3 0 Roller Skating 
Sentence : Canada obtained 1 more gold medal than Mexico.", "ical property that is necessary but often overlooked is fidelity , i.e., what is generated should be faithful to the underlying data, knowledge, or meaning representation.", "A line of recent work has started to address the surface-level fidelity issue of natural language generation (NLG) by encouraging the model to learn to reuse the verbatim of certain inputs through copy mechanism (See et al., 2017; Gu et al., 2016; Wiseman et al., 2017; Liu et al., 2018), structured attention (Liu et al., 2018), or planning and selection/entity modeling (Puduppully et al., 2019a,b).", "While shown to be effective, most such methods so far are primarily focused on surfacelevel realization and simply restate the facts in the underlying data (Figure 1).", "However, humans have the ability to generalize beyond superficial facts (e.g., Canada has got 3 gold medals. ) by inferring and communicating with new statements that can be entailed from these facts (e.g., Canada obtained the most gold medals. 
).", "We believe it is important for NLG models to be able to generalize beyond the superficla facts given to them as well.", "Therefore, we propose a new task, logical NLG , where a model is tasked Colombia has 4 medals in total.", "with generating natural language statements that can be logically entailed by the given data (i.e., the premises ).", "The new task requires a model to jointly reason and generate sentences that are consistent both linguistically and logically.", "Since there are a variety of reasoning/inference tasks such as natural language inference (Bowman et al., 2015) and commonsense reasoning (Talmor et al., 2019), to avoid confusion, this paper is specifically focused on inferences involving symbolic operations over the given table (Pasupat and Liang, 2015).", "To empower research in this direction, we col-lect a new corpus LOGICNLG based on the existing TabFact (Chen et al., 2019), which brings two major renovations to the existing NLG paradigm: 1) the text involves diversified types of logical inferences including math operations like max/min/sum/add, comparison operations like same/different, and counting operations like to-tal/only.", "A more detailed description of logical inference is listed in the Appendix.", "2) while existing datasets are often restricted to a specific domain such as weather (Liang et al., 2009), restaurant (Dusek et al., 2019), NBA (Wiseman et al., 2017), etc, LOGICNLG uses open-domain tables without prior knowledge about their schema.", "As such, existing methods based on surface-level copying (See et al., 2017; Gu et al., 2016; Puduppully et al., 2019a) becomes insufficient, so are the existing fidelity evaluation based on the surfacelevel information extraction (Wiseman et al., 2017; Rohrbach et al., 2018; Dhingra et al., 2019), which extracts surface triples in a certain pre-defined form (i.e. 
subj-pred-obj, n-gram) and compare them with the surface content given in the knowledge.", "Most neural generation models follow a monotonic generation schema from left to right, with the current prediction depending only on the preceding words.", "Logical NLG poses unique challenges to the traditional generation scheme due to the mismatch between sequence order and logical order.", "As illustrated in Figure 2, the word '2' is derived from the logical inference diff(Silver medal of Colombia, Silver medal of Canada) = 2. In other words, the logical order of the word '2' should be after 'more', 'silver', and 'Canada', while its sequence order is before those words.", "Since the monotonic generation scheme is purely based on sequence order while agnostic to logical order, existing NLG models struggle to maintain fidelity as they cannot model the logical dependency on future tokens.", "To alleviate such an order mismatch, an NLG model must have the capability to plan ahead for the next few steps before generation.", "In this context, we believe LOGICNLG to be an important testbed to study such a planning/inference ability in generation models (Ford et al., 2018; Welleck et al., 2019).", "In this paper, we further propose a non-monotonic coarse-to-fine generation model and show that it is able to alleviate the order mismatch problem and achieve better performance.", "The contribution of this work is three-fold: i) We propose a new research problem of logical natural language generation, and provide novel metrics to approximately evaluate the logical fidelity of generation models.", "ii) We justify the mismatch problem between sequence order and logical order of the traditional monotonic generation scheme in logical NLG.", "iii) We conduct comprehensive experiments with state-of-the-art neural generation models under both automatic and human evaluation, which demonstrates the challenges and opportunities for future research on logical NLG.", "Existing NLG datasets (Chen 
and Mooney, 2008; Dusek et al., 2019; Lebret et al., 2016; Liang et al., 2009) are mainly composed of surface-level descriptions over the given records.", "Though ROTOWIRE (Wiseman et al., 2017) involves sporadic inference in its long documents, the inference is restricted to domain-specific knowledge (e.g., double-double, smash, triple-double and other NBA-related terms).", "Hence, we need a better testbed for studying the proposed problem.", "Statistics: We construct a dataset based on TabFact (Chen et al., 2019), which is a table-based fact-checking dataset with rich logical inferences in the annotated statements.", "Specifically, we took their positive statements (the sentences which are entailed by the knowledge in the table) collected from the complex channel (which required annotating sentences with logical inference) as our target text.", "[Table 1: Comparison of LOGICNLG against existing NLG datasets in different aspects (Vocab / Examples / Vocab per Sent / Tables / Domain / Source / Inference / Schema): WEATHERGOV 394 / 22.1K / 0.01 / 22.1K / Weather / Crawled / No / Known; WikiBIO 400K / 728K / 0.54 / 728K / Biography / Crawled / No / Limited; ROTOWIRE 11.3K / 4.9K / 0.72 / 4.9K / NBA / Annotated / Few / Known; LOGICNLG 122K / 37.0K / 3.31 / 7.3K / Open / Annotated / Rich / Unlimited.]", "To prevent confusion with the original dataset, we name this table-to-text dataset LOGICNLG; it contains 28,450 training, 4,260 validation, and 4,305 test examples based on 7,392 open-domain tables crawled from Wikipedia.", "Each table has 5 different examples covering diverse types of logical inference.", "More detailed statistics and comparisons are listed in Table 1.", "
LOGICNLG is distinguished from the existing datasets due to: i) It involves very rich logical inference: every annotated sentence involves certain types of inference with minimal domain-specific knowledge.", "The open-domain characteristic simulates a realistic setting where we cannot enumerate the possible inferences based on the schema, which poses great challenges to the model's generalization capability.", "ii) It is mainly composed of short sentences with an average length of 11 and a simple syntactic structure, which isolates it from other linguistic complexity to focus on the problem of logical inference.", "The dataset contains tables with open schemas crawled from diversified domains (Figure 4).", "The major categories are sports, politics, and entertainment.", "The schema diversity of the tables makes rule-based systems infeasible to apply.", "Besides, most of the tables have very rich numerical records, which provide a great testbed for logical inference.", "Problem Definition: Here, we formally define our proposed table-to-text generation task.", "The input is a table T with its title denoted as a natural language sequence W.", "The table T = {T_(i,j) | i <= R_T, j <= C_T} has R_T rows and C_T columns, with T_(i,j) being the content in the (i,j)-th cell.", "[Figure 4: The domain distribution of LOGICNLG: Team/Player (Sports), Competition (Sports), Politics, Entertainment, Celebrity, Science.]", "T_(i,j) could be a word, a number, a phrase or even a natural language sentence.", "The annotated statement is a sentence Y = y_1, y_2, ..., y_n; we aim to train a neural generation model p(Y|T) to generate statements Y that are both fluent and logically (numerically) supported by the given table T.", "In this section, we discuss the evaluation of our proposed NLG task.", "The fluency evaluation is simply based on standard metrics like Perplexity (Bengio et al., 2003) and BLEU-1,2,3 (Papineni et al., 2002) based on NLTK 
(Bird, 2006).", "The most challenging problem is to evaluate the logical fidelity of the generated sentences, which is also the core problem of our paper.", "The existing IE-based extractive evaluation (Wiseman et al., 2017) leads to two issues as shown in Figure 3: 1) Empty Extraction: the sentence can not be formulated as (subject, predicate, object) structure, thus the IE system fail to extract triples for verification.", "2) False Negative: the sentence is a logical composition (instead of surface form) of the fact from the table, the IE system cannot match it against the table.", "For these reasons, we test two approximate automatic metrics: Sentence: Canada obtained 1 more gold medal than Mexico Eq(Hop(Filter(Nation==Canada), Gold Medal) 1) Parsing [Link->Search] True False Sentence: Canada obtained 1 more gold medal than Mexico Table: In the first row .", "Parsing-based Evaluation We first propose a model-based evaluation method, which aims to directly extract the meaning representation from the generated sentence and execute it against the table to verify its correctness. Our evaluation is based on weakly-supervised semantic parsing (Liang et al., 2009, 2013), the basic idea is to first link entities and predicates in the sentence, and then use linked entities to perform a breadth-first search to synthesize potential logical forms, finally, a scorer is used to re-rank these logical forms and filter out spurious ones. The logical form returns a binary value of True to indicate whether its logic is supported by the knowledge. The basic idea is shown in the upper part of Figure 5, the implementation details are in the Appendix. We pre-train the semantic parser f on the training set ( T , Y ) D train with weakly supervised algorithm, at test time, we use it to parse a sentence Y into a set of logical forms, which is re-ranked to obtain the highest logical form P best . 
We compute the ratio of P_best returning True on D_test to approximate the model's fidelity.", "NLI-based Evaluation: We then propose another model-based evaluation method to complement the parsing-based evaluation (which is sensitive to semantic variation); the basic idea follows Kryscinski et al. (2019) to evaluate the entailment score between the table and the generated sentence. The NLI model is based on Table-BERT (Chen et al., 2019), which linearizes the table into textual form and uses it as the evidence for natural language inference. The model is trained on the TabFact (Chen et al., 2019) dataset containing both positive/negative samples. During the evaluation, we use this NLI model to predict the entailment relationship based on the likelihood of p_NLI(Y|T).", "Finally, we compute the ratio of entailed examples to approximate the model's fidelity.", "Adversarial Evaluation: Adversarial evaluation (Goodfellow et al., 2014; Kannan and Vinyals, 2017) is used to study the generation model's robustness in logical reasoning. Specifically, we hire human workers from Amazon Mechanical Turk to annotate adversarial examples for the test/validation set by changing a minimum number of words to revert the logic of the sentence. Such adversarial examples preserve linguistic components like length and style, except the logic-related words, to specifically disentangle the generation model's reasoning skill. As drawn in the lower part of Figure 5, the original sentence's word 'more' is modified into 'less' as an adversarial example. There are two principles the workers need to follow to have their jobs accepted: 1) the modified words/phrases should be roughly equally frequent to balance the language prior; for example, the number 1 is better swapped with 2 or 3 rather than 9999, which rarely appears in the corpus. 2) the perturbations should be diverse enough to cover different aspects of logical reasoning skills. 
We use the generation model p(Y|T; θ) to score the original sentence Y and the adversarial sentence Y_adv. If the confidence of the original example is higher than that of its adversarial counterpart, we count it as a successful defense, otherwise as a failed defense. We use the success rate, averaged over examples with the indicator function I, to approximate the model's logical reasoning capability.", "Discussion: Both types of metrics have pros and cons. SP-Acc and NLI-Acc are unbiased, as they measure the peak samples in the model's likelihood; however, both metrics are based on imperfect models, and thus their evaluation scores are inaccurate. SP-Acc is more sensitive to number/calculation errors, and NLI-Acc is more sensitive to semantic errors; therefore, we report both of them to help increase the metrics' robustness. In contrast, the adversarial evaluation score is accurate in terms of reflecting the model's reasoning capability on the given samples. However, as the provided samples might not lie in the high-confidence area of the model's distribution, it is biased in reflecting the model's general reasoning capability. Though these fidelity metric models are prone to errors, in Section 6 we show their consistency with human judgment, which reveals their potential to assist human evaluation.", "In this section, we design comprehensive baseline models to perform logical NLG. Specifically, we consider the following two cases: non-pretrained models (LSTM/Transformer) with a copy mechanism and pre-trained models (GPT-2 and BERT) with sub-word units. We train these models with three different algorithms: Maximum Likelihood, Adversarial Training, and Reinforcement Learning.", "Here we mainly consider two table encoding methods, namely field-infusing and field-gating. These two methods differ in their strategies to coalesce the field information into cells. 
After the table is represented as a sequence of vectors, a decoder based on LSTM (Hochreiter and Schmidhuber, 1997) or Transformer (Vaswani et al., 2017) is applied to generate text token by token. The two methods are depicted in the upper part of Figure 6:", "Field-Infusing: This strategy is inspired by Lebret et al. (2016). We first use an LSTM (Hochreiter and Schmidhuber, 1997) to encode the table field text word by word and then use the last output z_i as the field representation. This representation is concatenated with the embedding of the row index #j and the word embedding at each cell to obtain a position-aware cell embedding e_k for each word inside the cell. We stack transformer layers on top of the cell embeddings to obtain the table representation h_i in R^D, with D as the dimension.", "Field-Gating: This strategy is inspired by Liu et al. (2018). Like the previous strategy, we first use an LSTM (Hochreiter and Schmidhuber, 1997) to obtain the field representation z_i. The field representation is concatenated with ending distance information as the input to an additional field gate built inside the LSTM, as suggested in Liu et al. (2018); such a field gate is used to control whether the current cell has already been encoded. Such a mechanism can help the LSTM identify the boundaries between different cells to grasp local information.", "To further enhance the fluency and resolve the out-of-vocabulary problem, we use pre-trained language models and finetune them on LOGICNLG. 
Specifically, we consider two models based on GPT-2 (Radford et al., 2019) and BERT (Devlin et al., 2019), respectively, and name them GPT-TabGen and BERT-TabGen.", "Table Linearization: We follow previous work on linearizing knowledge bases as natural language (Liu et al., 2019; Zhang et al., 2019) to propose table linearization, which uses a template to flatten the table T into a document P_T = w_1, ..., w_|T| that is fed into pre-trained language models to generate the statement Y, where we use w_i to denote the i-th word in the generated paragraph P_T and |T| to denote the length of the paragraph (the word w_i is either a table entry or a functional word in the template). As depicted in the bottom-left part of Figure 6, the original table T is transformed into a paragraph by horizontally scanning each cell from T_(1,1) to T_(1,C_T) to T_(R_T,C_T) in the table.", "GPT-TabGen: We directly feed the paragraph P_T as the input to the pre-trained GPT-2 model and generate the output sentence Y. We finetune the model on LOGICNLG by maximizing the likelihood of p(Y|P_T; θ), with θ denoting the parameters of the GPT-2 model (Radford et al., 2019).", "BERT-TabGen: 1) We encode the linearized paragraph P_T using the pre-trained BERT model into the source representation h_1, ..., h_|T|. 2) At the i-th time step, we replace all the words in the ground-truth statement Y after the i-th time step with the <MASK> token and use BERT to encode the partially masked Y^i as g^i_1, ..., g^i_n. 3) We use an attention layer f to obtain the output hidden states g'^i_1, ..., g'^i_n, where g'^i_i is used to predict the word y_i. 
We jointly optimize the parameters of BERT and the attention layer f to maximize", "the likelihood of generating the text Y conditioned on the table and the masked partial sentence.", "As BERT is a bidirectional model, we need to re-encode the target sentence at each step to get g^i_(1:n).", "Therefore, the generation is finished in n passes.", "Besides the standard maximum likelihood training, we also use the following training algorithms:", "Adversarial Regularization: To encourage the model to ground on the table rather than relying on artificial language priors (Ramakrishnan et al., 2018), we use an adversarial regularization to enhance the maximum likelihood training.", "Specifically, we first perform entity resolution to locate all the numbers, counts, and entities in the sentence and then randomly replace them with entities or numbers appearing in the table T.", "These perturbed samples Y_adv are used as adversarial examples to regularize the model's behavior.", "Formally, we optimize θ to maximize the objective: argmax_θ log p(Y|T; θ) - λ log p(Y_adv|T; θ), where λ is the controlling hyper-parameter.", "Reinforcement Learning: Maximum likelihood training is a fluency-driven objective, which is inconsistent with the goal of logical consistency.", "To bridge this gap, we view the generation problem from a reinforcement learning perspective to optimize the long-term fidelity.", "We use the trained semantic parser to assign a reward to the policy p(y_i|y_(1:i-1); θ).", "At the i-th step, the generator samples different actions y_i and rolls out from the (i+1)-th step to produce a full sequence starting from y_i using greedy search.", "The full sentence receives a binary score r(Y, T) from the semantic parser as the reward.", "Formally, we optimize the objective: argmax_θ E_(y_i ~ p(y_i|y_(1:i-1)))[E_(y_(i+1:n))[r(y_(1:n), T)]] log p(y_i|y_(1:i-1); θ), where we only use one trajectory to approximate the inner roll-out expectation for efficiency.", "As discussed before, the baseline models follow the 
monotonic generation scheme and suffer from the mismatch between sequence order and logical order (Figure 2).", "In this section, we propose an imperfect remedy for such a situation based on the coarse-to-fine generation paradigm.", "Before plunging into technical details, it is helpful to first realize the resemblance between logical NLG and semantic parsing (Dong and Lapata, 2018).", "Compared to traditional NLG tasks like machine translation and summarization, logical NLG is closer to semantic parsing in the sense that a model may make catastrophic errors that are impossible to correct at later steps (Figure 2).", "Therefore, we take inspiration from semantic parsing models (Dong and Lapata, 2018) that have proven effective in mitigating such errors and propose a coarse-to-fine generation scheme.", "We break down generation into two phases.", "In the first phase, the model only generates a template which determines the global logical structure, while in the second phase the model generates the final, grounded sentence conditioned on the template generated in the first phase.", "As depicted in Figure 7, we use the entity linker (Section 3) to identify the entities and numbers in the original sentence Y and replace them with the placeholder [ENT], which we call the template Y_T.", "During the generation of GPT-TabGen, instead of directly predicting the final sentence Y, we first predict the template Y_T and then Y.", "The process is simply realized by maximizing the overall likelihood of p(Y'|T; θ), where Y' = [Y_T; [SEP]; Y].", "Unlike template-based or delexicalized generation (Reiter and Dale, 1997; Wen et al., 2015), which uses rigid slot filling prone to grammatical errors, our fine-grained generation has the flexibility to modify the surface form of non-slot words, which alleviates the linguistic coherence problem (Sharma et al., 2017).", "By decoupling sentence structure generation and entity grounding, our proposed coarse-to-fine scheme 
could partially alleviate the mismatch problem.", "For example, the generation of 'Canada' is now aware of 'more than' in the latter part of the sentence, which exposes the model to more context than standard monotonic models and helps it make logically consistent decisions, though the dependency on '1' and 'Mexico' is still not captured.", "The proposed two-step generation could be viewed as a first step towards a fully non-monotonic generation model to solve such a mismatch problem.", "In this section, we explain the experimental details and then comprehensively report the automatic evaluation of different generation models and training algorithms.", "Finally, we conduct a detailed human evaluation and error analysis.", "For the non-pretrained models, we fix the hidden size of both the LSTM and the Transformer to 256; the Transformer has 3 layers with 4 heads, and the LSTM is also 3-layered.", "[Figure 8: Human evaluation results of different models (Transformer, GPT-2, Adv-Reg, RL, Coarse-to-Fine) over the categories non-sense, wrong, partial-correct, and correct, and generation accuracy for different logic types (superlative, only, before/after, count, comparison, both/neither, sum/diff, average, unique).]", "We use the Adam optimizer (Kingma 
and Ba, 2015) with a learning rate of 2e-4 to jointly optimize the parameters and keep the model with the best perplexity on the validation set.", "During test time, we use greedy search to generate text and calculate the BLEU-1,2,3 scores with the 5 references from the table.", "For the pre-trained models, we base our implementation on Huggingface's Transformers (Wolf et al., 2019) for both BERT (Devlin et al., 2019) and GPT-2 (Radford et al., 2019) with a subword-unit vocabulary of 30K.", "During linearization, we found that using the whole table compromises the performance greatly, partly due to 1) the over-length issue with pre-trained LMs and 2) too much irrelevant information in the input.", "Therefore, we propose to use a partial table as input: specifically, we run entity linking over the sentences to detect the linked columns of the table and only linearize the partial table as the input P_T.", "Both are finetuned using the Adam optimizer (Kingma and Ba, 2015) with a learning rate of 1e-6.", "In both the adversarial training and reinforcement learning algorithms, we add the maximum likelihood objective to stabilize training and select the appropriate balancing factor based on the validation Adv-Acc score.", "For coarse-to-fine training, we first warm up the model to generate the template sequence and then finetune it on the concatenated full sequence.", "Model selection is based on the BLEU-3 score on the validation split.", "We first perform an automatic evaluation to approximately measure the performance of different models and then conduct an in-depth human evaluation to gain a better understanding.", "Automatic Evaluation: The experimental results are summarized in Table 2, where we comprehensively survey different architectures and training algorithms.", "For the non-pretrained models, we observe that the Transformer is slightly better than the LSTM and that the two different table encoding strategies achieve similar results.", "In contrast, pre-trained models are much better at lowering the 
perplexity; moreover, the generated sentences significantly outperform those of the non-pretrained models in terms of both fluency and fidelity scores, with GPT-TabGen and BERT-TabGen achieving similar performance.", "As BERT-TabGen runs much slower due to the multiple passes of decoding, we favor GPT-TabGen in the following experiments.", "With adversarial regularization and reinforcement training, the models only improve the fidelity metric being optimized, while the fluency scores drop significantly.", "Such phenomena confirm our assumption about the caveats of the monotonic generation paradigm.", "For the proposed coarse-to-fine generation scheme, the [ENT] tokens are replaced by entity names, which normally contain phrases like 'Feb 2nd'.", "Such n-gram phrase substitution preserves the completeness of entity names and thus leads to higher 2/3/4-gram matches, which translates to higher BLEU-3 and lower BLEU-1 in Table 2.", "The proposed coarse-to-fine generation yields a reasonable improvement on NLI-Acc and Adv-Acc, which demonstrates its advantage in capturing logical dependency.", "Human Evaluation: To further investigate the quality of the generated text, we perform a human evaluation.", "Specifically, we sample 200 sentences from different models and distribute them independently to human experts (graduate students from the computer science department) to verify their quality.", "The quality measure is categorized into four categories: 1) non-sense: the sentence does not make much sense, mainly due to disfluency or repetition problems.", "2) wrong: a fluent sentence with wrong logic.", "3) partial-correct: the sentence contains more than one fact, at least one of which is correct; 4) correct: high quality in both fluency and logical correctness.", "We demonstrate the results in Figure 8, from which we observe that pre-training significantly decreases the non-sense proportion.", "However, RL and Adv-Reg both harm the fluency 
and lead to more non-sense sentences.", "In contrast, the coarse-to-fine model maintains the non-sense proportion while significantly increasing the correct/partial-correct sentences.", "From the human evaluation, even the best-performing model gets only slightly over 20% of its predictions logically correct, which reflects the challenge of LOGICNLG for the existing paradigm.", "Evaluation Metrics: Here we analyze the effectiveness of the defined automatic evaluation metrics for fidelity evaluation.", "For the parsing-based evaluation and NLI-based evaluation, we use the adversarial set (containing positive/negative sample pairs) to evaluate their consistency with human judgment.", "The parsing-based model only achieves an accuracy of 60%, while the NLI-based model achieves a higher accuracy of 65%.", "This indicates that fidelity measurement is itself a very challenging problem and that the existing models are still at a premature stage.", "Therefore, the exact number of SP-Acc or NLI-Acc cannot reliably reflect the exact proportion of sentences logically entailed by the table.", "However, we still believe they are informative for model development for the following reasons: 1) the automatic fidelity scores are quite stable, not sensitive to random initialization or different configurations, 2) when comparing different models (Transformer vs. GPT-2 vs. RL/Adv-Reg vs. 
Coarse-to-Fine), the trends of the different automatic scores are consistent with human evaluation, which indicates their potential in assisting the development of new models.", "Fine-grained Analysis: To better understand the generation model's reasoning capability regarding different logical operations, we pick the 9 most frequent operations (defined in the Appendix) and analyze the best model's capability in expressing these different types of logic.", "We demonstrate our human evaluation in Figure 8 and make the following observations: 1) the model performs best in justifying the order of different entities (before/after) and relating two entities (both/neither/comparison).", "2) the model performs reasonably well at superlative and count operations.", "3) the generation model performs much worse in operations like 'only' and 'unique'.", "4) the model is not able to perform mathematical aggregation like average, sum, etc.", "Overall, the string-based operations are easier than the numeric-based operations; how to infuse numeric knowledge is an open research question going forward.", "Natural Language Generation: Natural language generation is a long-standing problem (Kukich, 1983; Holmes-Higgin, 1994; Reiter and Dale, 1997), which involves generating text from records or data.", "Recently, many neural-based generation models have been proposed (Puduppully et al., 2019a,b; Lebret et al., 2016; Wiseman et al., 2018) that achieve impressive performance on the existing datasets (Chen and Mooney, 2008; Liang et al., 2009; Lebret et al., 2016; Dusek et al., 2019; Wiseman et al., 2017), since the annotated texts are mostly surface-level annotations without logical inference.", "Unlike them, LOGICNLG has rich inference, which poses great challenges to existing models and evaluations.", "Non-monotonic Generation: There have recently been attempts to study the problem of non-monotonic text generation, which aims to teach the generation model to learn the generation order without external 
supervision (Ford et al., 2018; Welleck et al., 2019; Gu et al., 2019; Mansimov et al., 2019).", "These models have shown to learn rational generation order to approach similar performance as the left-to-right case.", "These approaches are useful at capturing more sophisticated dependency within the sentence, which provides a plausible direction to pursue in LOGICNLG.", "Factualness Evaluation Fidelity is an important research topic in generation, In ROTOWIRE (Wise-man et al., 2017) and MSCOCO (Lin et al., 2014), IE-based extractive evaluation (Rohrbach et al., 2018; Dhingra et al., 2019) are adopted for surfacelevel matching to replace costly human evaluation.", "In abstractive summarization, Goodrich et al. (2019) proposes NER + Relation Classification method to investigate fidelity in generated summarization while Kryscinski et al. (2019) proposes to use NLI models to understand the entailment between generated text with the given document.", "These evaluations are beyond surface-level to study more sophisticated linguistic phenomena like paraphrasing, compression, entailment, inclusion, etc, which are common in summarization tasks.", "In this paper, we propose logical NLG to study the logical inference problem in generation.", "We conduct comprehensive experiments to show the existing NLG models are restricted by its monotonic nature and conclude this to be a proper next-step problem to study NLG systems.", "There are still some unsolved problems for Logical NLG, e.g. how to improve the quality of automatic metrics to better help human automatically judge models' performances.", "To promote the research in this direction, we host a LogicNLG challenge 2 to help better benchmark the current progress.", "The authors would like to thank the anonymous reviewers for their thoughtful comments." ]
[ "abstain", "other", "abstain", "abstain", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "other", "method", "method", "abstain", "other", "method", "other", "other", "other", "method", "other", "method", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "result", "abstain", "abstain", "other" ]
[ "Recent advances in prompt-based learning have shown strong results on few-shot text classification by using cloze-style templates.", "Similar attempts have been made on named entity recognition (NER) which manually design templates to predict entity types for every text span in a sentence.", "However, such methods may suffer from error propagation induced by entity span detection, high cost due to enumeration of all possible text spans, and omission of inter-dependencies among token labels in a sentence.", "Here we present a simple demonstration-based learning method for NER, which lets the input be prefaced by task demonstrations for in-context learning.", "We perform a systematic study on demonstration strategy regarding what to include (entity examples, with or without surrounding context), how to select the examples, and what templates to use.", "Results on in-domain learning and domain adaptation show that the model's performance in low-resource settings can be largely improved with a suitable demonstration strategy (e.g., 4-17% improvement on 25 train instances).", "We also find that good demonstration can save many labeled examples and consistency in demonstration contributes to better performance.", "1 1 Introduction Neural sequence models have become the de facto approach for named entity recognition (NER) and have achieved state-of-the-art results on various NER benchmarks (Lample et al., 2016; Ma and Hovy, 2016; Liu et al., 2018).", "However, these data-hungry models often rely on large amounts of labeled data manually annotated by human experts, which are expensive and slow to collect (Huang et al., 2020; Ding et al., 2021b), especially for specialized domains ( e.g. 
, research papers).", "To Authors contributed equally.", "improve NER performance on low-resource (label scarcity) settings, prior works seek auxiliary supervisions, such as entity dictionary (Peng et al., 2019; Shang et al., 2018; Yang et al., 2018; Liu et al., 2019) and labeling rules (Safranchik et al., 2020; Jiang et al., 2020), to either augment human-labeled data with pseudo-labeled data, or incorporate meta information such as explanation (Lin et al., 2020; Lee et al., 2020, 2021), context (Wang et al., 2021), and prompts (Ding et al., 2021a; Cui et al., 2021) to facilitate training.", "However, such methods have the following challenges: (1) human efforts to create auxiliary supervisions (e.g., dictionaries, rules, and explanations); (2) high computational cost to make predictions.", "For example, Ding et al. (2021a) shows effectiveness on entity type prediction given the entity span by constructing a prompt with the structure [entity span] is [MASK]\" . However, when the entity span is not given, cloze-style prompts need to be constructed over all the entity candidates in the sentence with the structure [entity candidate] is [MASK]\" to make a prediction (Cui et al., 2021).", "Such brute-force enumerations are often expensive.", "In this paper, we propose demonstration-based learning (Gao et al., 2021; Liu et al., 2021), a simple-yet-effective way to incorporate automatically constructed auxiliary supervision.", "The idea was originally proposed in prompt-based learning to show some task examples before the cloze-style template so that the model can better understand and predict the masked slot (Gao et al., 2021).", "This paper proposes modified version of demonstration-based learning for NER task.", "Instead of reformatting the NER task into the cloze-style template, we augment the original input instances by appending automatically created task demonstrations and feed them into pre-trained language models (PTLMs) so that the model can output improved 
token representations by better understandings of the tasks.", "Unlike existing efforts which require additional hu-2687", "man labor to create such auxiliary supervisions, our model can be automatically constructed by picking up proper task examples from the train data.", "Moreover, unlike approaches that need to change the format of token classification into cloze-style mask-filling prediction which can neglect latent relationships among token labels, our approach can be applied to existing token classification module in a plug-and-play manner (See Figure 1", "(a) vs", "(b)).", "We investigate the effectiveness of task demonstration in two different low-resource settings: (1) in-domain setting which is a standard NER benchmark settings where the train and test dataset come from the same domain; and (2) domain-adaptation setting which uses sufficient labeled data in source domain to solve new tasks in a target domain.", "Here, we study which variants of task demonstration are useful to train an accurate and label-efficient NER model and further explore ways to adapt the source model to target domain with a small amount of target data.", "We propose two ways of automatic task demonstration construction: (1) entity-oriented demonstration selects an entity example per entity type from train data to construct the demonstration.", "It allows the model to get a better sense of entity type by showing its entity example; and (2) instance-oriented demonstration retrieves instance example similar to input sentence in train data.", "It allows the model to get a better sense of the task by showing similar instances and their entities.", "We show extensive experimental results on CoNLL03, Ontonotes 5.0 (generic domain), and BC5CDR (biomedical domain) over 3 different templates and 5 selection/retrieval strategies for task demonstrations.", "For entity-oriented demonstration , we present 3 selection strategies to choose appropriate entity example per entity type: (1) random 
randomly selects entity example per entity type; (2) popular selects the entity example which occurs the most per entity type in the train data; and (3) search selects the entity example per entity type that shows the best performance in the development set.", "And for instance-oriented demonstration , we present 2 retrieval strategies to choose appropriate instance example ( SBERT (Reimers and Gurevych, 2019) vs. BERTScore (Zhang et al., 2020)).", "Our findings include: (1) good demonstration can save many labeled examples to reach a similar level of performance in low-resource settings.", "Our approach consistently outperforms standard fine-tuning by up to 3 points in terms of F1 score (p-value < 0.02); (2) demonstration becomes more effective when we also provide context.", "For example, not only showing Fischler is PER', but also the sentence that contains Fischler' as person, such as France backed Fischler's proposal'; and (3) consistency in demonstration contributes to better performance.", "Our experiments show that using consistent demonstration for all instances rather than varying per instance lead to better performance 2 Related Works NER with additional supervision Recent attempts addressing label scarcity have explored various types of human-curated resources as auxiliary supervision.", "One of the research lines to exploit such auxiliary supervision is distant-supervised learning.", "These methods use entity dictionaries (Peng et al., 2019; Shang et al., 2018; Yang et al., 2018; Liu et al., 2019) or labeling rules (Safranchik et al., 2020; Jiang et al., 2020) to generate noisy-labeled data for learning a NER model.", "Although these approaches largely reduce human efforts in annotation, the cross-entropy loss may make the model be overfitted to the wrongly labeled tokens due to noisy labels (Meng et al., 2021).", "Another line of research is incorporating such auxiliary supervision during training and inference in a setting of supervised learning.", 
"These approaches usually 2688", "incorporate external information that is encoded including POS labels, syntactic constituents, dependency relations (Nie et al., 2020; Tian et al., 2020), explanations (Lin et al., 2020; Lee et al., 2020, 2021), retrieved context (Wang et al., 2021) and prompts (Ding et al., 2021a; Cui et al., 2021).", "Demonstration-based Learning Providing a few training examples in a natural language prompt has been widely explored in autoregressive LMs (Brown et al., 2020; Zhao et al., 2021).", "Such prompt augmentation is called demonstration-based learning (Gao et al., 2021).", "This is designed to let prompt be prefaced by a few examples before it predicts label words for [MASK] in the cloze-style question.", "Recent works on this research line explore a good selection of training examples (Gao et al., 2021) and permutation of them as demonstration (Kumar and Talukdar, 2021).", "In this section, we introduce basic concepts of named entity recognition, standard fine-tuning for sequence labeling, and domain adaptation for sequence labeling.", "We then formally introduce our goal generating task demonstration and then developing a learning framework that uses them to improve NER models.", "Here, we let x = [ x (1) , x (2) , . . . x ( n ) ] denote the sentence composed of a sequence of n words and y = [ y (1) , y (2) , . . . y ( n ) ] denote the sequence of NER tags.", "The task is to predict the entity tag y ( i ) Y for each word x ( i ) , where Y is a pre-defined set of tags such as {B-PER, I-PER, . . . , O}.", "In standard fine-tuning , NER model M parameterized by is trained to minimize the cross entropy loss over token representations h = [ h (1) , h (2) , . . . 
h ( n ) ] which are generated from the pre-trained contextualized embedder as follows: L = n (cid:88) i =1 log f i,y i ( h ; ) (1) where f is the model's predicted conditional probability that can be either from linear or CRF layers.", "We let D train and D test denote the labeled train and test dataset, respectively, consisting of { ( x i , y i ) } .", "Here, we expect the number of labeled instances in D train is extremely limited (e.g., N < 50 ).", "Given such small labeled instances, our goal is to train an accurate NER model with task demonstrations compared to standard fine-tuning and show the effectiveness of demonstration-based learning.", "We evaluate the trained models on D test .", "Domain adaptation aims to exploit the abundant data of well-studied source domains to improve the performance in target domains of interest.", "We consider two different settings: (1) label-sharing setting in which the label space L = (cid:8) l 1 , . . . , l | L | (cid:9) (e.g., l i = P ERSON ) of source-domain data S and target-domain data T are equal; (2) labeldifferent setting which L is different.", "In domain adaptation, we first train a model M s on source-domain data S .", "Next, we initialize the weights of the new model M t by weights of M s .", "Here, we can either transfer the whole model weights or only the weights of contextualized embedder from M s to M t .", "Then, we further tune M t on target-domain data T .", "In our preliminary experiments, we find that transferring only the embedder from M s to M t is much more effective than transferring the whole model weights (See first rows in Table 2 and Table 3).", "For this paper, we focus on the effectiveness of our models to adapt to the target domain with a T , for which the number of instances is extremely limited.", "We then 2689", "In this work, we focus on how to create effective task demonstration x to elicit better token representations for x , and then we propose an efficient learning framework that can be 
improved by the effect of [ x ; x ] .", "This section introduces the concepts of demonstration-based learning , and provides details of the approach.", "Here, we study example sampling strategies and templates to construct the demonstration (Sec 4.1) and how we can train the NER model with the demonstration (Sec 4.2).", "Task demonstration x = [ [SEP] ; x 1 ; ; x l ] is constructed by selecting entity example e or retrieving instance example s from D train ( T train for domain adaptation) and modifying by template T to form x i .", "The demonstration sequence x is then appended to the original input x to create a demonstration-augmented input [ x ; x ] .", "Here, [SEP] in front of x is to separate x and x .", "The key challenge of constructing task demonstration is to choose appropriate e or s and template T that can be helpful to demonstrate how the model should solve the task.", "As shown in Figure 2, we categorize the demonstration into (1) entity-oriented demonstration ; and (2) instance-oriented demonstration by whether we choose e or s respectively, for demonstration.", "Entity-oriented demonstration.", "Given an entity type label set L = (cid:8) l 1 , . . . , l | L | (cid:9) , we select an entity example e per label l from D train .", "Then, we modify it using template T .", "To select e per each l , we first enumerate all the e D train and create a mapping { l i : [ e 1 , . . . 
, e n ] | l i L } between l and corresponding list of entities.", "Then for each label l , we select e by three selection strategies: (1) random randomly chooses e from the list; (2) popular chooses e that occurs the most frequently in the list; and (3) search conducts grid search over possible entity candidates per label.", "Here, we sample top-k frequent entities per label, and search over combinations of entity candidates ( = k | L | ).", "We find the best combination that maximizes the F1 score on the dev set D dev .", "Here, x i for every x i is different in random while x i for every x i is same in popular and search .", "Instance-oriented demonstration.", "Given an input x , we retrieve an instance example s that is the most relevant to the input from D train .", "Then, we modify the s along with its { e, l } s by template T .", "For retrieval, we present two strategies: (1) SBERT (Reimers and Gurevych, 2019) retrieves semantically similar sentence using pre-trained bi-encoder.", "It produces CLS embeddings independently for an input x and s D train , and compute the cosine similarity between them to rank s D train ; (2) BERTScore (Zhang et al., 2020), which is originally used as a text generation metric, retrieves token-level semantically similar sentence by computing a sum of cosine similarity between token representations of two sentences.", "Since the NER task aims to token classification, sentence-level similarity may retrieve a sentence that is semantically relevant but has no relevant entities.", "Fixed vs Variable demonstration.", "As described in previous sections, the demonstration in some strategies varies per instance while in others it stays fixed globally.", "We can divide the demonstration strategies into two categories: (1) Variable demonstration: random , SBERT , BERTScore (2) Fixed demonstration: popular , search Demonstration template.", "(1) no-context shows selected e per l with a simple template e is l", ".\", without including the 
spe-2690 cific sentence where the entities show up. Between each pair of ( e, l ) (of different entity labels l ), we concatenate with separator [SEP] . This template is only applied to the entity-oriented demonstration. (2) context in entity-oriented demonstration shows selected e per l along with an instance sentence s that contains e as a type of l . For each triple of ( e, l, s ) , it is modified into s . e is l .\" and concatenated with [SEP] .", "For instance-oriented demonstration, it shows the retrieved instance s along with all the entities mentioned in the sentence e s .", "It is modified into s .", "e 1 is l 1 . . . .", "e n is l n", ".\".", "(3) lexical in entity-oriented demonstration also shows selected e per l along with an instance sentence s .", "But here we only show s , which the entity span e is replaced by its label string l .", "For instance-oriented demonstration, we show retrieved s by replacing e s with the corresponding l .", "We expect such templates can form labeling rules and let the model know how to label the sentence.", "Transformer-based standard fine-tuning for NER first feeds the input sentence x into a transfomer-based PTLMs to get the token representations h .", "The token representations h are fed into a CRF layer to get the conditional probability p ( y | h ) , and the model is trained by minimizing the conditional probability by cross entropy loss: L = n (cid:88) i =1 log p ( y | h ) (2) In our approach, we define a neural network parameterized by that learns from a concatenated input [ x ; x ] .", "For both model training and inference, we feed the input and retrieve the representations: [ h ; h ] = [ h (1) ,...h ( n ) , h (1) ,... 
h ( n ) ] = embed([ x ; x ]) (3) As shown in Figure 1, we then feed h into the CRF layer to get predictions and train by minimizing the conditional probability p ( y | h ) as Equation 2.", "For domain adaptation, we first train M s with standard fine-tuning.", "Then, transfer the weights of embedder of M s to M t and further fine-tune M t with our approach.", "We consider three NER datasets as target tasks.", "We consider two datasets for a general domain Dataset Label Train Data 25 50 CoNLL03 PER (Person) 16.0 3.52 29.2 4.52 LOC (Location) 15.6 3.92 30.4 4.07 ORG (Organization) 21.8 2.31 32.6 3.77 MISC (Miscellaneous) 11.0 2.52 15.6 2.33 Ontonotes 5.0 PER (Person) 10.8 2.22 21.4 4.02 LOC (Location) 16.0 3.52 25.0 7.32 ORG (Organization) 13.8 3.48 24.2 6.17 MISC (Miscellaneous) 23.8 5.56 62.6 7.93 BC5CDR Disease 25.8 6.01 29.2 4.52 Chemical 51.0 7.49 65.8 7.12 Table 1: Data statistics.", "( CoNLL03 (Tjong Kim Sang, 2002), Ontonotes 5.0 (Weischedel et al., 2013)) and one dataset for a bio-medical domain ( BC5CDR (Li et al., 2016)).", "CoNLL03 is a general domain NER dataset that has 22K sentences containing four types of general named entities: LOCATION , PERSON , ORGANIZATION , and MISCELLANEOUS entities that do not belong in any of the three categories.", "Ontonotes 5.0 is a corpus that has roughly 1.7M words along with integrated annotations of multiple layers of syntactic, semantic, and discourse in the text.", "Named entities in this corpus were tagged with a set of general 18 well-defined proper named entity types.", "We split the data following (Pradhan et al., 2013).", "BC5CDR has 1,500 articles containing 15,935 CHEMICAL and 12,852 DISEASE mentions.", "To show its effectiveness in few-shot NER, we also show baselines of few-shot NER methods NNShot and StructShot (Yang and Katiyar, 2020).", "NNshot is simple token-level nearest neighbor classification system while StructShot extends NNshot with a decoding process using abstract tag transition distribution.", 
"Here, both the classification model and the transition distribution should be pre-trained on the source dataset.", "Thus, we consider this as domain adaptation setting.", "We implement all the baselines and our frameworks using PyTorch (Paszke et al., 2019) and Hugging-Face (Wolf et al., 2020).", "We set the batch size and learning rate to 4 and 2e-5, respectively, and use bert-base-cased model for all the experiments.", "For each variant, we run 50 epochs over 5 different sub-samples and 3 random seeds with early-stopping 20 and show its average and stan-2691 Demonstration / Method Strategy Template CoNLL03 Ontonotes 5.0 BC5CDR 25 50 25 50 25 50 BERT+CRF w/o demonstration -52.72 2.44 62.75 0.98 38.97 4.62 54.51 3.27 52.56 0.46 60.20 2.01 BERT+CRF w/ SBERT lexical 48.92 2.81 57.68 0.37 36.58 4.61 44.47 2.58 49.41 0.94 51.98 2.14 Instance-oriented demonstration ( variable ) context 53.62 1.64 64.21 1.87 42.18 5.21 53.07 3.46 54.71 2.09 59.78 1.47 BERTScore lexical 49.55 3.18 58.85 1.06 35.42 3.88 44.70 2.41 49.37 0.19 51.61 2.45 ( variable ) context 53.97 1.52 64.66 2.04 37.56 5.29 53.13 3.22 54.81 2.11 59.63 1.94 BERT+CRF w/ random no-context 53.95 1.89 63.31 2.14 42.25 3.61 55.71 3.82 53.58 0.48 59.97 1.89 Entity-oriented demonstration ( variable ) lexical 55.20 2.24 63.60 2.32 44.02 4.73 56.31 3.83 53.79 0.61 59.65 1.71 context 54.84 2.12 63.51 2.83 43.57 3.73 56.76 3.69 54.08 0.97 59.94 1.70 popular no-context 54.34 3.33 64.30 2.76 43.02 4.33 56.65 3.35 53.86 0.86 60.51 1.77 ( fixed ) lexical 56.22 3.88 64.95 2.04 45.31 5.02 58.24 3.17 54.14 0.67 60.67 1.58 context 56.52 3.34 64.47 2.35 45.52 4.69 58.40 3.24 54.31 0.80 61.31 1.51 search no-context 54.63 2.12 64.50 2.76 42.88 5.41 56.96 4.09 53.97 1.32 60.84 2.14 ( fixed ) lexical 56.57 3.61 65.11 2.71 44.87 5.09 58.51 3.42 54.39 1.57 60.76 2.12 context 57.00 4.03 64.82 3.16 45.74 5.57 59.00 3.27 55.83 1.25 62.87 2.41 Table 2: In-domain performance comparison (F1-score) on CoNLL03, Ontonotes 5.0, and BC5CDR by 
different number of training instances.", "dard deviation of F1 scores.", "Unlike existing sampling methods for few-shot NER (Yang and Kati-yar, 2020), in which the training sample refers to one entity span in a sentence, we consider a real-world setting that humans annotate a sentence.", "We sub-sample data-points by random sampling with a constraint that sampled instances should cover all the BIOES labels (Chiu and Nichols, 2016) in the whole dataset.", "For Ontonotes, we aggregate all other entity types rather than person, location, and organization into miscellaneous to set the label sharing setting for domain adaptation experiments.", "Table 1 presents statistics of average number of entities per entity type over 5 different sub-samples.", "We first compare the overall performance of all baseline models and our proposed framework with the amount of training data 25 and 50 to show the impact of our approach in a low-resource scenario, assuming a task that needs to be annotated from scratch.", "Then, we show performance analysis to show the effectiveness of our approach and whether the model really learns from the demonstration.", "In-domain setting In Table 2, we can observe that most variants of demonstration-based learning consistently and significantly (with p-value < 0.02) outperform the baseline by a margin ranging from 1.5 to 7 F1 score in three low-resource NER datasets (25, 50 train instances respectively).", "It demonstrates the potential of our approach for serving as a plug-and-play method for NER models.", "Domain adaptation setting First, we observe that simple domain adaptation technique can improve the performance (First rows of Table 2 vs. 
Table 3).", "Here, we only transfer the embedder weights of M s to M t , and we expect the performance gain can be attributed to the embedder of M s , which is trained in task adaptive pre-training manner on NER task formats (Gururangan et al., 2020).", "In Table 3, we can see that the most variants of demonstration-based learning allow the source model M s to be adapted to the target domain in fast with a small amount of target data T , compared to baselines without demonstration including few-shot NER methods.", "Entity vs. Instance-oriented demonstration.", "instance-oriented demonstration performs worse than entity-oriented demonstration due to the diffi-culty of finding an appropriate similar instance in a low resource train data.", "In our analysis, we find that the average cosine similarity between retrieved example s and input x is less than 0.4 which shows many of the retrieved examples are not appropriate similar examples to the input.", "Fixed vs. Variable demonstration.", "As mentioned in section 4.1, random doesn't pick a fixed set of demonstrations the same way as popular and search .", "Instead, it picks random demonstrations for each input instance.", "In a low-resource setting, there are often no significantly popular entities.", "Therefore, the fact that popular outperforms random in our experiments might suggest that the consistency of demonstration selection, rather than popularity of selected entities, is a crucial factor in better few-shot learning.", "To test this, we randomly select one entity per entity type and attach it as the demonstration to all instances, we call it ( fixed random ).", "As shown in Figure 4, it outperforms random and is on par with popular and search .", "We believe this serves as evidence for two hypotheses: (1) consistency of demonstration is essential to performance, and (2) in low-resource settings, the effectiveness of combinations of entities as demonstrations might be a rather random function and not too 
affected by the combination's collective popularity in the training dataset, which further implies that the idea of search is on the right track.", "Performance in other model variants To show the effectiveness of demonstration-based learning as plug-and-play method, we present performance in other model variants: bert-large-cased , LM Strategy Template In-domain LabelSharing CoNLL03 CoNLL03->Ontonotes 25 50 25 50 BL -52.08 2.02 66.42 2.14 63.50 0.96 70.59 1.16 RB-59.67 4.65 70.17 3.93 68.43 2.09 74.11 1.19 RL -59.15 2.93 71.51 3.44 68.16 2.65 74.45 1.02 BL popular context 57.60 3.37 67.11 2.31 64.09 2.95 70.88 1.09 RB popular context 59.76 4.27 70.21 3.41 69.09 2.63 74.53 1.32 RL popular context 59.99 2.16 72.15 3.81 68.78 2.89 74.93 1.07 Table 4: Performance comparison (F1-score) with various backbone LMs: bert-large-cased (BL) ; roberta-base (RB) ; and roberta-large (RL) .", "roberta-base and roberta-large .", "As shown in Table 4, our method shows consistent improvement over baselines (p-value < 0.05).", "It shows that demonstration-based learning can be applied to any other model variants and output better contextualized representations for NER tasks and show its potential for scalability.", "Effectiveness of search .", "search consistently outperforms all other strategies.", "It shows that not only the entity selection, but also the combination of entity examples per each entity type affects the performance.", "To see whether it consistently outperforms the baseline over various low-resource data points, we show the performance trend of entity-oriented demonstration in Figure 5.", "Templates of entity-oriented demonstration.", "entity-oriented demonstration becomes more effective when not only showing the entity example per each entity type, but also the corresponding instance example as a context.", "context and lexical consistently outperform no-context .", "We explore other templates as well, and these three are the best among them.", "We present details 
on Appendix A. To see whether the order of entity type in entity-oriented demonstration affects the performance, we present analysis of entity type permutation, e.g., person organization location miscellaneous .", "There is no 2693 25 # of train instances (CoNLL03) 50 52 54 56 58 60 F 1 S c o r e No demonstration Permutation (context) # of train data: 25", "with/without the demonstration (denoted by O\" and X\", respectively) at training and inference time.", "clear pattern of which entity type order is better (spearman correlation between F1-scores over different entity type orders with 25 and 50 training instances < 0), but all the permutations outperform the baseline as shown in Figure 6, which show that demonstration-based learning can be effective regardless of the order (See Appendix Figure 8).", "Demonstration perturbation.", "To investigate whether the model really learns from demonstration, we explore the performance of our approach with perturbed demonstration which selects random entities, labels, and context sentences as demonstration.", "Here, we present two studies: (1) Test perturbation which train with correct demonstration and test with perturbed demonstration; and (2) Train-test perturbation which both train and test with perturbed demonstration.", "Figure 7 shows perturbed demonstration disturbs the model in a large margin for both case.", "This shows that the model affects by demonstration, and proper demonstration can improve the model's performance.", "Full results are available in Appendix Table 9.", "Effects of demonstration in train & inference.", "Table 5 shows the effects of demonstration in training and inference stage.", "A comparison of row 0 with row 3 shows that applying demonstration in the training stage but not in the inference stage would make the model perform worse than the fine-tuning baseline.", "This is another evidence that CoNLL BC5CDR Ontonotes 25 Train Instances 35 40 45 50 55 60 65 F 1 S c o r e Train-test Perturbation 
(context) Original Perturbation", "Fully supervised setting.", "Table 6 shows the performance in fully supervised setting, where the train data is sufficient.", "We can see that demonstration-based learning yields similar performance as baselines (p-value < 0.1), which shows that demonstrations are rather redundant when data is abundant.", "In this paper, we propose demonstration-based learning for named entity recognition.", "Specifically, we present entity-oriented demonstration and instance-oriented demonstration and show that they successfully guide the model towards better understandings of the task in low-resource settings.", "We observe that entity-oriented demonstration is more effective than instance-oriented demonstration , and search strategy consistently outperforms all other variants.", "Moreover, we find that consistent demonstration for all the instances is crucial to the superior performance of our approach.", "We believe that our work provides valuable cost reduction when domain-expert annotations are too expensive and opens up possibilities for future work in automatic demonstration search for few-shot named entity recognition." ]
[ "abstain", "abstain", "abstain", "method", "method", "abstain", "objective", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "method", "result", "abstain", "abstain", "objective", "objective", "objective", "abstain", "abstain", "result", "result", "method", "abstain", "result", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "result", "result", "result", "method" ]
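The entity-oriented demonstration and the perturbation study described above can be sketched in a few lines of Python. The template format ("[entity] is [type].") and the helper names (`build_demonstration`, `perturb`) are illustrative assumptions for this sketch, not the authors' exact implementation:

```python
# Hypothetical sketch of entity-oriented demonstrations for NER, plus the
# random perturbation used in the train/test perturbation experiments.
import random

def build_demonstration(examples):
    """Concatenate one (entity, type, context) triple per entity type
    into a demonstration string appended to every input instance."""
    parts = []
    for entity, etype, context in examples:
        parts.append(f"{context} {entity} is {etype}.")
    return " ".join(parts)

def perturb(examples, entity_pool, context_pool, seed=0):
    """Replace entities and contexts with random ones while keeping the
    label sequence, mimicking the perturbed-demonstration condition."""
    rng = random.Random(seed)
    return [(rng.choice(entity_pool), etype, rng.choice(context_pool))
            for _, etype, _ in examples]

examples = [
    ("Steve Jobs", "person", "Steve Jobs founded Apple."),
    ("Apple", "organization", "Apple is based in Cupertino."),
]
demo = build_demonstration(examples)
# The demonstration is appended to the instance the model should tag.
model_input = "Tim Cook visited Berlin. [SEP] " + demo
```

A consistent `demo` string is reused for all instances, which matches the paper's finding that consistency across instances matters.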
[ "The NLP community is currently investing much more research and resources into the development of deep learning models than into training data.", "While we have made a lot of progress, it is now clear that our models learn all kinds of spurious patterns, social biases, and annotation artifacts.", "Algorithmic solutions have so far had limited success.", "An alternative that is being actively discussed is a more careful design of datasets so as to deliver specific signals.", "This position paper maps out the arguments for and against data curation, and argues that fundamentally the point is moot: curation already is and will be happening, and it is changing the world.", "The question is only how much thought we want to invest into that process.", "The key ingredient behind the recent successes in NLP is Transformer-based language models.", "The paradigm of pre-training followed by fine-tuning on downstream tasks was popularized by BERT (Devlin et al., 2019), and is actively developed (Rogers et al., 2020b).", "In December 2020 the human performance baselines on SuperGLUE (Wang et al., 2019a) were surpassed twice, making the community wonder if it is possible to formulate benchmarks not solvable in this paradigm.", "However, the successes are not the full story.", "It is becoming increasingly clear that much of the remarkable performance is down to benchmarks that do not actually require sophisticated verbal reasoning skills, due to annotation artifacts and spurious patterns correlating with the target labels (Gururangan et al., 2018; McCoy et al., 2019; Paullada et al., 2020).", "The social biases in NLP models are also attracting more attention (Sheng et al., 2019; Davidson et al., 2019; Hutchinson et al., 2020).", "The \"garbage in, garbage out\" principle suggests that the situation will not change without a dramatic reappraisal of how NLP data is collected, both for pre-training and task-specific resources.
But that seemingly uncontroversial conclusion is at the core of the interdisciplinary tension between NLP understood as a deep learning (DL) application area, and the more qualitative approaches of computational linguistics and AI ethics. How deep that tension goes is illustrated by the recent heated (and sometimes less than professional¹) debate around \"On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?\" by Bender, Gebru et al. (2021). This position paper brings together the arguments for and against curating data² from linguistic and ethical perspectives (Section 2). It makes the case that curation is unavoidable and already happening, and that any data choices that we make, explicitly or implicitly, will affect the real world (Section 3). Thus the debate is only about how much thought we should put into this process. If we are to at least try to steer it, we have to overcome the interdisciplinary tension and reconsider what counts as NLP work (Section 4).", "Section 5 outlines some policies that could help.", "The core argument for active curation/design of the data that goes into NLP models is that the models are representations of the data they were trained on, and thus data work is necessary to make sure that the models can learn what we need them to learn.", "The supporting evidence for this position [sentence truncated by footnotes 1 and 2].", "[Footnote 1: https://www.theverge.com/22309962/timnit-gebru-google-harassment-campaign-jeff-dean]", "[Footnote 2] In this paper \"data curation\" is interpreted broadly as making choices about what should be included in an NLP resource", "(either for pre-training or task-specific data).", "The phenomena to be included/excluded could be defined in terms of what is said", "(e.g. soccer commentary), how it is expressed", "(e.g. with or without expletives), and/or who is speaking or being addressed", "(e.g.
teenage soccer fans).", "Our world is far from perfect, and written texts contain plenty of evidence of all kinds of social biases based on gender, race, social status, ability, age, etc.", "Models may learn these biases (from pre-training and/or task data) and even amplify them, putting the minority groups at a disadvantage by direct psychological harm and propagation of stereotypes (Blodgett et al., 2020; Bender et al., 2021).", "In this context, data curation means selecting data based on its sociocultural characteristics (Jo and Gebru, 2020).", "Fundamentally, this is about fair representation for different social groups.", "Some dismiss Bender et al. (2021) as political, or even advocacy rather than research (Lissack, 2021).", "However, papers advocate for specific research agendas all the time (Venkatasubramanian, 2021).", "NLP in particular has a growing subfield of bias mitigation (see e.g. the survey on such work for gender bias by Sun et al. (2019)) that pursues exactly the same social justice agenda, but does not receive the same pushback.", "Models may memorize specific facts in training data, and if those facts happen to be personally identifiable information, this is a security concern.", "For instance, Carlini et al. (2020) showed that GPT-2³ was able to memorize personal contact information, even if it only appeared on a few web pages.", "A big problem is that this is not a bug, but a feature: we do want our language models to represent some facts about presidents, just not about private citizens.", "Deciding what should not be remembered is clearly a data curation issue.", "2.1.3 (Lack of) progress towards NLU.", "DL models are data-hungry, and so far we have heavily relied on the sources that are easy to scale: web texts for pre-training, and crowdsourcing for annotation or generating shorter texts.", "Combined with most funding and effort allocated to model development, this meant a less clear view of what
was in the data.", "Consequently, the recent years witnessed a lot of findings along the following lines.", "[Footnote 3] Google's legal department reportedly requested edits to the article by Carlini et al. (2020), in particular to avoid mentions of Google technology (Dave, 2021).", "DL models learn spurious patterns present in the data.", "These patterns can be the results of the heuristics used by crowd workers (Gururangan et al., 2018), small samples of workers creating large parts of the data with identifiable traces (Geva et al., 2019), or simply random patterns in the task or pre-training data.", "For example, words like football may frequently occur in abusive tweets, but this should not give the model the idea that all sports fans are violent (Wiegand et al., 2019).", "The result is that many current datasets can (and do) get \"solved\" with shallow cues such as lexical co-occurrence (Jia and Liang, 2017; McCoy et al., 2019).", "The larger the resource, the more difficult it is to avoid them (Gardner et al., 2021).", "DL models are brittle to perturbations.", "The ACL 2020 best paper award went to Ribeiro et al. (2020)'s demonstration that even the successful, commercially deployed NLP systems cannot handle many core linguistic phenomena like negation.", "Pre-trained language models by themselves do not necessarily cope with them either (Ettinger, 2020).", "This suggests that the current resources do not provide the signal to learn the necessary linguistic paradigms.", "DL models struggle to learn rare phenomena.", "Linguistic phenomena generally follow a Zipfian distribution (Zipf, 1945), which means that most of them are rare in naturally occurring data, and thus harder for the models to learn.", "This applies even to the large pre-training datasets.", "For example, Zhang et al. (2020) compared the learning rates for different linguistic phenomena as RoBERTa was pre-trained on more and more data.", "English irregular verb forms (highly frequent) were acquired in under
10M of training tokens, but the model struggled with island effects even after 30B tokens.", "Such results suggest that if something needs to be learned, the model needs to be provided with a sufficiently strong signal", "(and it may still fail even then", "(Geiger et al., 2019)).", "The bottom line is that the distributions of linguistic phenomena in the current NLP resources do not seem to provide the signal with which the current models could learn to perform human-level language understanding.", "We do not even know the full spectrum of abilities that would qualify for that.", "Choosing which aspects of a given task", "(or language, in case of pre-training)", "a given resource would teach explicitly is a curation decision.", "A relatively recent development is universal adversarial triggers\": adversarial attacks on the models that modify the textual input in a way that forces the models to always output a certain prediction", "(Wal-lace et al., 2019).", "For example, the authors make a SQuAD-trained reading comprehension model to always predict the answer to kill American people\" for any why-question. This effect is robust and model-independent: i.e. 
it is the training data that gets \"hacked\", not the model.", "It is not clear if it is possible to construct a dataset that would not have such vulnerabilities, but common sense suggests that the training data should be curated so as to make them unlikely to occur in the natural distribution of user input.", "So far the fundamental paradigm for NLP work based on machine learning has focused on in-distribution evaluation: the test sample would come from the same distribution as the train/validation samples, and the samples would be randomly split.", "Within that paradigm, it is essential that there are no overlaps between training and test data, which is an issue for many current resources (Lewis et al., 2021; Emami et al., 2020).", "To do that well, we already have to make decisions about what counts as \"overlap\", and what should be in the training and testing data.", "For example, in pre-training GPT-3 (Brown et al., 2020) decisions had to be made about which benchmarks would be used for evaluation.", "There was a (partially successful) attempt to simply remove documents with significant overlap with any test examples from the training data, which raises a new issue: if the goal is to train a general-purpose model, what information could we safely exclude from training purely for evaluation purposes?", "Linzen (2020) suggests switching to out-of-distribution testing: given that the training data is unlikely to faithfully represent a full range of linguistic phenomena, in-distribution evaluation likely overstates how well the model is doing.", "But to do that, we would still need to know what is \"in\" the training distribution, and what we would be testing.", "To sum up, there are (at least) four reasons to make deliberate decisions about what should be included in the training data, so as to create more robust, inclusive, and secure NLP models.", "What are the objections?", "Since this is a position paper arguing that data curation is
unavoidable, the arguments against it are presented together with the defense.", "Most of them are applicable to both pre-training and task data", "(except for 2.2.2, which focuses on pre-training).", "In response to Bender et al., Goldberg", "(2021)", "argued that there are valid use cases in which a model of language use should reflect how the language is actually being used\", rather than how we believe it should be used.", "Defense.", "This is a completely valid argument, and what follows is elaboration rather than refutation.", "In linguistic or social science research, it is uncontroversial that if the corpus is a representative sample of the target phenomena, it should not be manipulated.", "If the goal is to model the world-view of Reddit users, the corpus used for training GPT-2", "(comprising articles shared on Reddit)", "is a representative sample.", "Likewise, if the goal is to study social biases, we should not eliminate e.g. racist comments.", "The problem raised by Bender et al.", "(2021)", "is only that resources should be used for what they are: the Reddit users are not a representative sample of the general population, and so GPT-2 is not a general-purpose language model.", "This argument concerns the qualitative studies of the world as it is.", "Most NLP research, however, aims to produce systems that would perform some task.", "In that case the natural distribution may not even be what we want: e.g. if the goal is a question answering system, then the natural\" distribution of questions asked in daily life", "(with most questions about time and weather)", "will not be helpful.", "The developers may also prefer for their systems to be e.g. 
less racist/sexist than their input data.", "Note that to study the world as it is\" we still have to do a lot more data work than we are currently doing", "(so as to be able to tell whether a given corpus actually represents the target phenomenon).", "An anonymous reviewer of this paper contributed the following argument: the size of the data is so large that, in fact, our training sets are not a sample at all, they are the entire data universe.", "Defense.", "This argument would stand if the data universe\" that we use for training NLP systems were the same as the totality of human speech/writing\".", "It is not, and will hopefully never be, because collecting all speech is problematic for ethical, legal, and practical reasons.", "Anything less than that is a sample.", "Given the existing social structures, no matter how big that sample is, it is not representative due to", "(at least)", "unequal access to technology, unequal possibility to defend one's privacy and copyright, and limited access to the huge volumes of speech produced in the walled garden\" platforms like Facebook.", "The use of uncontrolled samples", "(like the Common-Crawl-based corpora)", "would have to be justified by arguing either that the above types of bias can be safely ignored, or that the benefits outweigh the risks.", "Do we really have to do hard data work, or could there be an algorithmic solution?", "For the problem of rare phenomena", "(2.1.3), there is ongoing work on inductive biases that could help the models learn them", "(McCoy et al., 2020).", "For social issues", "(2.1.1)", "Goldberg", "(2021)", "and Buckman", "(2021)", "similarly suggest that rather than trying to filter out problematic samples", "(hate speech, racial slurs etc.)", "we could use them to build a representation of the undesirable phenomena, and to try to actively identify and filter them out in generation.", "Schick et al.", "(2021)", "propose a method for a generative language model to reduce biases in its 
output, using self-diagnosis with its own internal knowledge.", "Defense.", "It is entirely possible that algorithmic alternatives could work better than solutions based on data curation.", "Which one will be more successful is an empirical question.", "As of now, it seems that they are complementary rather than mutually exclusive: for example, some specific biases could be handled algorithmically, but data curation could be used to balance the corpus in some other way(s).", "Note that the algorithmic solutions would still require much of the same data work for evaluation purposes: to find out whether a system is effective at filtering out something undesirable or processing some rare pattern, these phenomena have to be identified, a test set has to be constructed, we would need to make sure that it does not overlap with the training data, and ideally we would know to what degree the various aspects of these phenomena are supported by training evidence.", "This is a big part of the work that would go into designing a training dataset.", "The history of AI could be viewed as a trajectory towards a decreased amount of explicitly injected knowledge.", "The early AI systems were fully driven by carefully constructed rules and ontologies.", "They were replaced by the statistical approaches, relying on heavy feature engineering.", "The great promise of DL was to stop trying to define everything, and let the machine identify and leverage patterns from huge datasets: \"we should stop acting as if our goal is to author extremely elegant theories, and instead embrace complexity and make use of the best ally we have: the unreasonable effectiveness of data\" (Halevy et al., 2009).", "And it seems to work: pre-training larger models with more data keeps producing state-of-the-art results (Sun et al., 2017; Brown et al., 2020; Fedus et al., 2021).", "Calls for careful construction of datasets fly in the face of that dream.", "We would arguably be even worse off than when we started:
at least in the early AI days we only needed to define the phenomenon to be modeled, and now we also have to find hundreds of examples for that phenomenon.", "Defense.", "Disappointing as it is, we have to admit that although deep-learning-based systems are much better than their predecessors, they are still brittle and do not work well outside the range of cases well represented in the training data", "(and even there they may work for the wrong reasons).", "What is more, we are fundamentally no closer to the elusive idea of understanding\" language or its meaningful production", "(Bender and Koller, 2020).", "It is true that we were able to solve chess and Go without expert knowledge", "(Sutton, 2019), but these are closed-world games with a known set of rules describing that world.", "Attempting to do so in the areas that feed from the real social world and impact that world", "(NLP, facial recognition, algorithmic decision-making on loans etc.)", "could amplify undesirable patterns present in the big data.", "As stated in 2.2.3, it is possible that there is an algorithmic approach that will work equally well or better.", "Which one will win is an empirical question.", "As of now, it is fair to say that data curation is at least an alternative to be considered.", "This is not to say that the current technology cannot yield useful solutions.", "The achievements are undeniable: the advances in machine translation, question answering, and dialogue already power better customer service, educate and inform, enable communication and information flow for people who could not afford professional translation.", "There is certainly room for useful research to further improve the current solutions, define new tasks and transfer to new domains and languages, even if no fundamental breakthroughs come any time soon.", "The question is only whether we want to be able to tell in what circumstances our models can be used safely", "(Mitchell et al., 2019).", "If so, that would 
require more thinking about data.", "As mentioned in 2.1.3, the distribution of language phenomena tends to be Zipfian (Zipf, 1945), which means that most phenomena are rare and difficult to learn.", "A perfect dataset would provide a strong signal for each phenomenon that should be learned.", "That's not how language works, so we may never be able to create something like that.", "Balanced datasets are an improvement, but not a solution (Wang et al., 2019b; Rogers et al., 2020a).", "Defense.", "The impossibility of perfection does not entail the impossibility of improvement.", "For example, a sentiment analysis system that performs as well as the current systems while handling negation and coreference correctly, and not pre-judging football fans as violent, is a doable next goal.", "Curation means making conscious choices about what to include and what to exclude.", "These are essentially choices about designing a world.", "What linguistic patterns, what concepts, what demographic attributes, what values should that world encode?", "This is a daunting question, requiring a lot of interdisciplinary expertise, and impossible to casually address within a small NLP application project.", "Neither the social sciences nor linguistics offer a ready set of answers, only things to consider in various contexts.", "The discriminated sub-groups, their values, and underlying social constructs may also differ across communities: e.g.
both in India and the US there is discrimination based on skin tone, but in the US context it stands for race, and in India it is a proxy for ethnicity, caste, and class (Sambasivan et al., 2021a).", "Defense.", "This is an entirely valid point, but it is an objection not to data curation per se, but to data curation in a way that would inflict one set of values and linguistic choices on everyone.", "That is indeed to be avoided at all costs, and there is a real danger of that happening when NLP systems are commercially deployed and widely used, but the data choices behind them are not explicit.", "The position advocated in this paper, as well as by Bender et al. (2021), is only that whatever categories and demographics went into the data design, they have to be documented (Bender and Friedman, 2018; Gebru et al., 2020) and made explicit, so that the users could be informed about what is happening (Mitchell et al., 2019).", "Some studies will just use convenience samples, and some will intentionally try to create a representation of a world without racial prejudice or rich with island effects.", "There are valid use cases for both, as long as it is clear who/what is being represented and for what purposes.", "The tide seems to be turning in this direction: since this work was submitted for review, at least two papers came out documenting popular resources for pre-training language models (Dodge et al., 2021; Bandy and Vincent, 2021).", "The popular HuggingFace NLP dataset library⁴ is also working towards data cards for its resources.", "Documenting the choices made in the dataset design is a prerequisite to model cards (Mitchell et al., 2019), which could facilitate a healthy interaction between the communities served by the system and the developers of that system.", "It is entirely possible for that interaction to happen in a democratic process: the policies could be developed, announced, and updated based on the evolving user preferences.",
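The documentation practices discussed above (data statements, datasheets, model cards) can be approximated programmatically. Below is a minimal, hypothetical data-card record with a completeness check; the field names are illustrative assumptions in the spirit of Bender and Friedman (2018) and Gebru et al. (2020), not a standardized schema:

```python
# Hypothetical sketch: a dataset documentation record plus a validator
# that reports which required fields are still missing.

REQUIRED_FIELDS = {
    "curation_rationale", "language_varieties", "speaker_demographics",
    "text_characteristics", "collection_process", "known_limitations",
}

def validate(card: dict) -> list:
    """Return the documentation fields that are still missing, sorted."""
    return sorted(REQUIRED_FIELDS - card.keys())

card = {
    "curation_rationale": "Balance forum text with newswire.",
    "language_varieties": ["en-US", "en-GB"],
    "speaker_demographics": "Self-reported; skews 18-35.",
    "text_characteristics": "Web forum threads, informal register.",
    "collection_process": "CommonCrawl subset, deduplicated by URL.",
}
missing = validate(card)  # the card above still lacks "known_limitations"
```

A check like this could run in CI for a dataset repository, so that a resource cannot be released while its documentation is incomplete.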
"Robustness in handling linguistic and social peculiarities of a given community should be a selling point for a product striving to win that community over: something to compete for and showcase, rather than avoid mentioning.", "When argument 2.2.6 is made, sometimes it seems to rest on the idea that the distributions in our resources objectively reflect the world.", "On that view, the calls to data curation would seem opinionated and unnecessary, if not outright dystopian.", "But the idea that it is possible to work on NLP \"in the vacuum\", unmarked by linguistic and social categories, is an illusion.", "A decision to use a convenience sample is also a choice, an act of curation.", "Using any data to derive research conclusions or in commercial applications is only safe if we know what/who it represents.", "In cognitive and sociolinguistics, one of the methods of studying the linguistic and conceptual repertoire of a certain individual or a demographic is through collecting a representative corpus of their speech (synchronic or diachronic).", "[Footnote 4: https://huggingface.co/docs/datasets/]", "That corpus inevitably reflects a particular world view⁵.", "The differences in these world views are expressed as variation in what kinds of linguistic structures people are likely to use, what they are likely to talk about, what their presuppositions, social context, and stereotypes are, to what extent any of that is verbally expressed, etc.", "Some of that variation is idiosyncratic, some attributable to social groups, but even a cursory look at all the variation strongly suggests that there is no \"language in general\".", "It is still possible to talk about language at a certain level of abstraction (e.g. \"British English\" vs the myriad of UK dialects), but only with a good sample representing all the necessary subsets.
For example, it would be wrong to construct a \"British English\" resource based only on London samples, because they do not represent the rest of the country (either linguistically or socioeconomically).", "A major achievement of corpus linguists is the \"national corpora\" such as the BNC (Leech, 1992), painstakingly created to represent a diverse sample of written and spoken genres in a certain geographical region in a certain timeframe, so as to enable studies of that specific variety of language.", "Creating such corpora involves careful sampling, detailed documentation of the domains and speakers that were represented, and much negotiation with publishers for copyright exceptions.", "A typical corpus for training language models, or really any NLP dataset, is likewise a sample of speech of a certain group of people, who have their linguistic preferences and sets of values.", "Consequently, that sample, whether it is coherent or not, and whether it was collected with any specific intentions, represents a certain \"picture of the world\". Moreover, the purpose of using this data for training is to create a system that would encode that view of the world and make predictions consistent with it. But a typical NLP dataset⁶ currently has few specifications of the demographics, dialects, or the range of domains and linguistic phenomena it covers. Unfortunately, it does not mean that the result is some abstract \"standard\" or \"neutral\" language.", "[Footnote 5] This is a key concept in the works of Neo-Humboldtian scholars: the world image (Weltbild) of Weisgerber, the naive picture of the world (naivnaja kartina mira) of Apresyan (1995), and many others.", "[Footnote 6] Corpora generated on crowd worker platforms such as Amazon Mechanical Turk typically impose geographic restrictions, such as \"location in US or Canada\", but there is no guarantee that the recruited workers are even native speakers.", "It is some kind of interpolation from the mixture of signals in the data that we have very little idea about.", "Why does it matter?", "The linguistic and conceptual repertoire of humans is dynamic.", "Our vocabulary, grammar, style, and cultural competence change as we go on with our lives, encounter new concepts, forget some things and reinforce others.", "A key part of that change is the linguistic signals we encounter in communication: on the nativist account, children have innate constraints that guide⁷ their learning from the data they encounter (Chomsky, 2014; Hornstein and Lightfoot, 1985), and on the usage-based accounts (Bybee, 2006; Lieven and Tomasello, 2008) that process is entirely data-driven.", "Humans can learn the meaning of words from a single exposure (Carey and Bartlett, 1978; Borovsky et al., 2010), but there is also robust evidence of frequency effects in language acquisition (Ambridge et al., 2015; Diessel and Hilpert, 2016).", "It is not by accident that the frequency of the vocabulary to be learned is a key variable in language pedagogy (Zahar et al., 2001).", "In short, humans, like DL models, learn from the patterns in the speech that they encounter.", "And those patterns do not have to come from human speakers anymore: much of the speech that we will encounter in the future is likely to be synthetic.", "According to Pilipiszyn (2021), GPT-3 is already generating 4.5B words per day in applications such as question answering, summarization, interactive games, and customer support.", "This cannot but have an impact back on the human speakers⁸ in the following ways: An NLP system generating text contributes to a human learner's input in the same way as human writers, and probably also speakers (but potentially on a much larger scale).", "An NLP system that processes human input to answer questions, translate, perform assisting actions etc.
has both direct impact (as a language model above), and an indirect impact: as these systems become more widespread, the kind of language that they can and cannot successfully interpret will be respectively reinforced or made less prominent.", "[Footnote 7] The \"radical\" nativist position would be that knowledge of language is entirely innate and is not affected by what the children observe, but on that position we would have to claim innate knowledge of the word \"carburetor\" (Knight, 2018).", "[Footnote 8] Synthetic speech will also clearly have an impact on the future models if it seeps into the training data.", "There is research on watermarking generated text (Venugopal et al., 2011; Abdelnabi and Fritz, 2021), but it is not clear what, if anything, the currently deployed systems are doing in this regard.", "There is at least one documented case of GPT-3 used to post on Reddit as if it were a human user (Philip, 2020).", "An NLP system that makes decisions in processing applications, grading student work, curating news feeds, summarizing papers and emails, recommending content has the potential to make a long-lasting impact on the lives of its users, and the kinds of language that it can process successfully clearly play a role.", "The point to take from all of this is that any mismatch of linguistic and social feature distributions between NLP systems and their users will have some impact on the world, and for the commercial, widely used NLP systems that impact may be significant.", "So the debate is not about whether we should change the world by making choices about the data: this is happening either way, because even our convenience samples still reflect numerous implicit choices.", "The debate is only about how much thinking we want to invest into changing our world.", "This thought is somewhat scary (in what way will children growing up with Alexa be different?), but also exciting: the educational opportunities alone could be breathtaking, reaching far beyond
the students who are already in a good position to do well in school.", "We could also create something simplistic, uninspiring, mindlessly entertaining, and/or not-inclusive.", "That choice is ours.", "To sum up the above discussion: there are no neu-tral\", one-size-fits-all textual corpora. There is also no manual that would provide foolproof instructions for collecting a correct\" corpus for any given context. And all of these complications are not even the main problem, right? After all, data only serves the task of creating a model, which is the real contribution of an NLP paper?", "In theory, the field of NLP is interdisciplinary. In practice, it became something closer to one of the applied areas of machine learning\" rather than computational linguistics\". Furthermore, at least as far as graduate students are concerned, it is something performed as an academic exercise, and as such it does not really have to concern itself with its possible effects on the world.", "already hard enough. Most DL practitioners have neither the training nor time to also do the data work at the level that the linguists and ethicists are calling for. 
The publication system does not provide the right incentives for that either: modeling NLP work is prestigious and welcomed at top conferences, while data work is \"janitorial\", less well paid, under-valued and de-glamorised 9", "(Sambasivan et al., 2021b).", "It does not help that there seems to be a systematic miscommunication between the fields.", "When linguists or ethicists talk about the issues with the current solutions, the practitioners may take it as an accusation that they are not doing a good job, rather than as an invitation to improve things together.", "Likewise, when the practitioners propose new systems, the linguists and ethicists may be frustrated: not by the incremental improvements on leaderboards as such, but by the lack of accompanying discussion of what the proposed methods are supposed to do better, and for whom.", "Step", "1. Understand each other better.", "The fact is, the AI ethics people are not really out to \"cancel\" everybody. It is easy to see why they would be frustrated that the social justice issues have never been a priority, terrified at what \"move fast & break things\" has already done with the social world, and dubious that they just need to wait and change would come.", "The linguists are not completely useless.", "Chances are, many problems that the DL engineers are having could be fixed if someone was just around to realize that the tokenizer didn't handle the suffixes well.", "And the engineers are not inherently evil.", "They just need resources, training, collaborators, time, and better research incentives.", "Instead, they have to churn out papers in 2 months just to stay in the publication race, with no time to dive deeper into what their systems are actually doing.", "9 Of course, this perception is not universal, and there are", "(very few)", "\"unicorn\" resources like SQuAD (Rajpurkar et al., 2016) that highly influenced the field. But overall the power balance in the field is currently not in favor of resource work. 
the current situation is through conferences. There will be a lot more interest in data work if it becomes more publishable. As of now, the resources and evaluation track is something of a poor relation to the machine learning track, which in ACL 2020 10 attracted nearly 3 times more submissions. Most task-specific tracks (question answering, summarization, dialogue etc.) are supposed to receive both engineering and data submissions, but in that setting the interdisciplinary tension may lead to resource papers being voted down simply for being resource papers (Rogers and Augenstein, 2020). Bawden (2019) cites an ACL 2019 reviewer who complained that \"the paper is mostly a description of the corpus and its collection and contains little scientific contribution\".", "We really need to take the type of contribution 11 into account in reviewer assignment, in review form design, and in reviewer training programs.", "We also need to make sure that the resource tracks are consistently offered 12 , with dedicated best paper awards to raise the prestige of this work in the community.", "Some conferences have already started to provide reviewer mentoring, double down on ethics, and consider what signal they send to companies and students by their best paper awards.", "We can all help by lobbying program chairs whenever we have a chance, offline and online.", "A helpful factor is that the ever-increasing size of models is making the state-of-the-art leaderboard chase financially untenable even for well-resourced labs, and they are looking for other outlets.", "This is a chance for the NLP community to engage more deeply with the phenomena that we are modeling.", "Step", "3. Educate.", "The idea that \"NLP\" means \"deep learning\" may well arise if it is taught as a one-semester course focusing on the engineering. If the coursework is fully powered by existing resources, it creates the impression that data is not a part of the job. 
The result is that the students learn that it is entirely possible to just run off-the-shelf parsers without knowing anything about syntax, or do sentiment analysis without knowing anything about pragmatics. And if it is possible to not do more work, why would anyone bother? We need to provide our students with the skills to stress-test their systems and critically examine their data, so as to be able to spot potential issues early on. (10 https://www.aclweb.org/adminwiki/images/9/90/ACL_Program_Co-Chairs_Report_July_2020.pdf 11 As was done e.g. at COLING 2018: http://coling2018.org/paper-types/ 12 E.g. this track was recently absent at EMNLP 2020.) For that, they will need the basic linguistic theory, the fundamentals of sociolinguistics and pragmatics. Likewise, some aspects of psychology (dual processing theories, memory and attention span, cognitive biases, \"nudging\") are a prerequisite for designing interfaces not only for annotation projects, but for any kind of interactive NLP systems. And some awareness of the social power structures would help in not propagating the harmful stereotypes. Some strategies for building NLP curricula have been discussed at the TeachingNLP workshop (Radev and Brew, 2002; Brew and Radev, 2005; Palmer et al., 2008; Derzhanski and Radev, 2013; Jurgens et al., 2021). Most importantly, NLP courses need to combat the idea that all the knowledge about the human world is just irrelevant in the age of big data and DL. The garbage in, garbage out principle is still relevant.", "We may be able to sort the garbage and learn from it anyway, but only if we have at least some idea about what kind of garbage we have.", "Step", "4. 
Collaborate.", "Large companies and universities provide a significant competitive edge to their authors just by virtue of the in-house collaboration networks they can offer.", "But it is becoming increasingly easy for everyone to find external collaborations, especially in a world in pandemic lockdown.", "One opportunity is Twitter, used by an estimated 40% of EMNLP 2020 authors 13 .", "What would it mean to \"collaborate\"?", "At the bare minimum, in an engineering project the linguists and social scientists could help to at least try to characterize the data that was used with something like data statements (Bender and Friedman, 2018; Gebru et al., 2020).", "A more ambitious goal would be to involve them early on in the data selection, preparation, and iterative development.", "Ideally, there would be joint formulation of research goals, thinking together about what kind of world we are building.", "Finding collaborators is much easier for established researchers, not only because they are a known quantity, but also because they are already aware of what could be done in an interdisciplinary project.", "They probably even already know the people who they could ask to join.", "But the students could use some help, especially those from the less well-connected institutions.", "They could bene- 13 Source: EMNLP 2020 organizers.", "fit from establishing some kind of skill exchange network, where the students with an engineering background could help out in data projects and students with a linguistics/social science background could help out in engineering projects.", "This would probably be the best way to ease the interdisciplinary tension, instill respect for each other's expertise, as well as the awareness that NLP is a huge problem that we do not even understand that well, and for which we need all the help we can get.", "Step", "5. 
Estimate.", "The goal of all the above data work is ultimately to enable informed decisions by the public, the CEOs, and the policy makers about what kind of world we would live in.", "One takeaway from the heated debate around (Bender et al., 2021) is that if one side in an interdisciplinary debate focuses mostly on the potential benefits of something, and the other mostly on its harms, the stance is likely to become adversarial, and we do not give each other the benefit of the doubt 14 .", "Nevertheless, the people on both sides of the debate are researchers, and they want to make informed decisions.", "That is only possible through cost-benefit analysis.", "It is clear that the first step has to be thorough documentation of the data (Bender and Friedman, 2018; Gebru et al., 2020): this lets us compare the represented population and the population of the target users, and think through the possible harms.", "However, it is not clear how to weigh the harms against the benefits.", "Which population will get exposed to the proposed tech?", "What are the direct and indirect benefits for the user population?", "What are the direct and indirect harms to the population in general (not limited to the users of the proposed tech), in particular the marginalized groups?", "If certain harms are inflicted on the user population, would they have the political/legal recourse to be compensated?", "How compute-efficient would the implementation be, how would the energy be sourced, and would that affect any other populations?", "15 Many of these points are made in the NAACL ethics FAQ https://2021.aclweb.org/ethics/Ethics-FAQ/", "How widely would it eventually be adopted, and how would that change the likelihood of benefits and harms to different user groups?", "What is the potential for further innovation that would significantly change the appeal, deployability or risks of the proposed solution?", "What are the risks of human error and deliberate misuse if the 
tech is stolen/replicated by terrorists, authoritarian governments, propaganda organizations and other bad actors?", "Unfortunately, the world is volatile and business plans change all the time.", "There is so much uncertainty for each of these points that it is not clear how to even start.", "Yet we have to try to come up with a process for working these things out, and eventually develop templates and calculators that developers could use to make estimates for best-, worst-, and realistic scenarios.", "This is an area in which NLP is desperately in need of collaboration with economics, governance and law.", "In that, again, NLP conferences could take the lead.", "There could be regular tracks that would incentivize joint publications with experts from these fields.", "The search for solutions is already going on, but this way the NLP community would participate in it rather than just meet with regulation post-factum.", "To be able to provide meaningful peer review for such work, we would need a mechanism for recruiting external reviewers with the required expertise on an as-needed basis.", "Our data is already changing the world, and will keep doing so whether we are being intentional about it or not.", "We might as well at least try: we do want more robust and linguistically capable models, and we do want models that do not leak sensitive data or propagate harmful stereotypes.", "Whether those goals are ultimately achieved by curating large corpora or by more algorithmic solutions, in both cases we need to do a lot more data work.", "The current dynamic suggests that this won't happen unless we overcome the interdisciplinary tensions and turn our conferences into truly shared spaces.", "Many thanks to Emily M. Bender, Yoav Goldberg, Ryan Cotterell, and the anonymous reviewers for their thoughtful comments on this paper." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", 
"method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "abstain", "other" ]
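The record above pairs each sentence string positionally with one label from a small tag set (abstain, method, objective, other). A minimal sketch of loading and sanity-checking one such record, assuming Python and hypothetical field names (`sentences`, `labels`):

```python
import json

# Label tag set observed in the record above.
LABELS = {"abstain", "method", "objective", "other"}

def check_record(record):
    """Sanity-check one sentences/labels record (field names are assumed)."""
    sentences, labels = record["sentences"], record["labels"]
    # Labels align positionally with sentences, so the lengths must match.
    if len(sentences) != len(labels):
        raise ValueError("sentence/label length mismatch")
    unknown = set(labels) - LABELS
    if unknown:
        raise ValueError(f"unknown labels: {unknown}")
    return True

# A tiny record in the same shape as the dump above (contents hypothetical).
record = json.loads(
    '{"sentences": ["We present a deep RL architecture.", '
    '"Results show faster convergence."], '
    '"labels": ["method", "abstain"]}'
)
print(check_record(record))  # True
```

A check like this catches the most common corruption in such dumps: sentence arrays and label arrays drifting out of alignment when strings are merged or split during extraction.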
[ "Text-based adventure games provide a platform on which to explore reinforcement learning in the context of a combinatorial action space, such as natural language.", "We present a deep reinforcement learning architecture that represents the game state as a knowledge graph which is learned during exploration.", "This graph is used to prune the action space, enabling more efficient exploration.", "The question of which action to take can be reduced to a question-answering task, a form of transfer learning that pre-trains certain parts of our architecture.", "In experiments using the TextWorld framework, we show that our proposed technique can learn a control policy faster than baseline alternatives.", "We have also open-sourced our code at https://github.com/rajammanabrolu/KG-DQN.", "Natural language communication can be used to effect change in the real world.", "Text adventure games, in which players must make sense of the world through text descriptions and declare actions through natural language, can provide a stepping stone toward more real-world environments where agents must communicate to understand the state of the world and indirectly effect change in the world.", "Text adventure games are also useful for developing and testing reinforcement learning algorithms that must deal with the partial observability of the world (Narasimhan et al., 2015; He et al., 2016).", "In text adventure games, the agent receives an incomplete textual description of the current state of the world.", "From this information, and previous interactions with the world, a player must determine the next best action to take to achieve some quest or goal.", "The player must then compose a textual description of the action they intend to make and receive textual feedback of the effects of the action.", "Formally, a text-based game is a partially observable Markov decision process (POMDP), represented as a 7-tuple of ⟨S, T, A, Ω, O, R, γ⟩ representing the set of 
environment states, conditional transition probabilities between states, words used to compose text commands, observations, observation conditional probabilities, reward function, and the discount factor respectively (Cote et al., 2018).", "In text-based games, the agent never has access to the true underlying world state and has to reason about how to act in the world based only on the textual observations.", "Additionally, the agent's actions must be expressed through natural language commands, ensuring that the action space is combinatorially large.", "Thus, text-based games pose a different set of challenges than traditional video games.", "Text-based games require a greater understanding of previous context to be able to explore the state-action space more effectively.", "Such games have historically proven difficult for AI agents to play, and the more complex variants such as Zork still remain firmly out of the reach of existing approaches.", "We introduce three contributions to text-based game playing to deal with the combinatorially large state and action spaces.", "First, we show that a state representation in the form of a knowledge graph gives us the ability to effectively prune an action space.", "A knowledge graph captures the relationships between entities as a directed graph.", "The knowledge graph provides a persistent memory of the world over time and enables the agent to have a prior notion of what actions it should not take at a particular stage of the game.", "Our second contribution is a deep reinforcement learning architecture, Knowledge Graph DQN (KG-DQN), that effectively uses this state representation to estimate the Q-value for a state-action pair.", "This architecture leverages recent advances in graph embedding and attention techniques (Guan et al., 2018; Velickovic et al., 2018) to learn which portions of the graph to pay attention to given an input state description, in addition to having a mechanism that allows for natural 
language action inputs.", "Finally, we take initial steps toward framing the POMDP as a question-answering (QA) problem wherein a knowledge graph can be used not only to prune actions but to answer the question of what action is most appropriate.", "Previous work has shown that many NLP tasks can be framed as instances of question-answering and that we can transfer knowledge between these tasks (McCann et al., 2017).", "We show how pre-training certain parts of our KG-DQN network using existing QA methods improves performance and allows knowledge to be transferred from different games.", "We provide results on ablative experiments comparing our knowledge-graph-based approach to strong baselines.", "Results show that incorporating a knowledge graph into a reinforcement learning agent converges to the highest reward more than 40% faster than the best baseline.", "With pre-training using a question-answering paradigm, we achieve this fast convergence rate while also achieving high-quality quest solutions as measured by the number of steps required to complete the quests.", "A growing body of research has explored the challenges associated with text-based games (Bordes et al., 2010; Narasimhan et al., 2015; He et al., 2016; Fulda et al., 2017; Haroush et al., 2018; Cote et al., 2018; Tao et al., 2018).", "Narasimhan et al. (2015) attempt to solve parser-based text games by encoding the observations using an LSTM.", "This encoding vector is then used by an action scoring network that determines the scores for the action verb and each of the corresponding argument objects.", "The two scores are then averaged to determine the Q-value for the state-action pair.", "He et al. 
(2016) present the Deep Reinforcement Relevance Network (DRRN), which uses two separate deep neural networks to encode the state and actions.", "The Q-value for a state-action pair is then computed by a pairwise interaction function between the two encoded representations.", "Neither of these methods is conditioned on previous observations, and so both are at a disadvantage when dealing with complex partially observable games.", "Additionally, neither of these approaches prunes the action space, and so both end up wasting trials exploring state-action pairs that are likely to have low Q-values, likely leading to slower convergence times for combinatorially large action spaces.", "Haroush et al. (2018) introduce the Action Eliminating Network (AEN), which attempts to restrict the actions in each state to the top-k most likely ones, using the emulator's feedback.", "The network learns which actions should not be taken given a particular state.", "Their work shows that reducing the size of the action space allows for more effective exploration, leading to better performance.", "Their network is also not conditioned on previous observations.", "Knowledge graphs have been demonstrated to improve natural language understanding in other domains outside of text adventure games.", "For example, Guan et al. 
(2018) use commonsense knowledge graphs such as ConceptNet (Speer and Havasi, 2012) to significantly improve the ability of neural networks to predict the end of a story.", "They represent the graph in terms of a knowledge context vector using features from ConceptNet and graph attention (Velickovic et al., 2018).", "The state representation that we have chosen, as well as our method of action pruning, builds on the strengths of existing approaches while simultaneously avoiding the shortcomings of ineffective exploration and lack of long-term context.", "In this section we introduce our knowledge graph representation, action pruning, and deep Q-network architecture.", "In our approach, our agent learns a knowledge graph, stored as a set of RDF triples, i.e. 3-tuples of ⟨subject, relation, object⟩.", "These triples are extracted from the observations using Stanford's Open Information Extraction (OpenIE) (Angeli et al., 2015).", "OpenIE is not optimized to the regularities of text adventure games, and there are a lot of relations that can be inferred from the typical structure of descriptive texts.", "For example, from a phrase such as \"There is an exit to the north\" one can infer a has relation between the current location and the direction of the exit (Figure 1: graph state update example given two observations).", "These additional rules fill in the information not provided by OpenIE.", "The resultant knowledge graph gives the agent what essentially amounts to a mental map of the game world.", "The knowledge graph is updated after every agent action (see Figure 1).", "The update rules are defined such that there are portions of the graph offering short and long-term context.", "A special node, designated \"you\", represents the agent, and relations out of this node are updated after every action, with the exception of relations denoting the agent's inventory.", "Other relations persist after each action.", "We intend for the update rules to be applied to 
text-based games in different domains and so only hand-craft a minimal set of rules that we believe apply generally.", "They are: (1) linking the current room type (e.g. 'basement', 'chamber') to the items found in the room with the relation has, e.g. ⟨chamber, has, bed stand⟩; (2) extracting information regarding entrances and exits and linking them to the current room, e.g. ⟨basement, has, exit to north⟩; (3) removing, after every action, all relations relating to the \"you\" node, with the exception of inventory, e.g. ⟨you, have, cubical key⟩; (4) linking rooms with directions based on the action taken to move between the rooms, e.g. ⟨chamber, east of, basement⟩ after the action \"go east\" is taken to go from the basement to the chamber. All other RDF triples generated are taken from OpenIE.", "The number of actions available to an agent in a text adventure game can be quite large: A = O(|V| × |O|²), where |V| is the number of action verbs and |O| is the number of distinct objects in the world that the agent can interact with, assuming that verbs can take two arguments.", "Some actions, such as movement, inspecting inventory, or observing the room, do not have arguments.", "The knowledge graph is used to prune the combinatorially large space of possible actions available to the agent as follows.", "Given the current state graph representation G_t, the action space is pruned by ranking the full set of actions and selecting the top-k.", "Our action scoring function is: +1 for each object in the action that is present in the graph; and +1 if there exists a valid directed path between the two objects in the graph.", "We assume that each action has at most two objects (for example inserting a key in a lock).", "Following Narasimhan et al. 
(2015), all actions A that will be accepted by the game's parser are available to the agent at all times.", "When playing the game, the agent chooses an action and receives an observation o_t from the simulator, which is a textual description of the current game state.", "The state graph G_t is updated according to the given observation, as described in Section 3.1.", "We use the Q-learning technique (Watkins and Dayan, 1992) to learn a control policy π(a_t | s_t), a_t ∈ A, which gives us the probability of taking action a_t given the current state s_t.", "The policy is determined by the Q-value of a particular state-action pair, which is updated using the Bellman equation (Sutton and Barto, 2018): Q_{t+1}(s_{t+1}, a_{t+1}) = E[r_{t+1} + γ max_{a ∈ A_t} Q_t(s, a) | s_t, a_t] (1) where γ refers to the discount factor and r_{t+1} is the observed reward.", "The policy is thus to take the action that maximizes the Q-value in a particular state, which will correspond to the action that maximizes the reward expectation given that the agent has taken action a_t at the current state s_t and followed the policy π(a | s) after.", "The architecture in Figure 2 is responsible for computing the representations for both the state s_t and the actions a^(i) ∈ A and coming to an estimation of the Q-value for a particular state and action.", "During the forward activation, the agent uses the observation to update the graph G_t using the rules outlined in Section 3.2.", "The graph is then embedded into a single vector g_t.", "We use Graph Attention (Velickovic et al., 2018) with an attention mechanism similar to that described in Bahdanau et al. (2014).", "Formally, the Multi-headed Graph Attention component receives a set of node features H = {h_1, h_2, . . . 
, h_N}, h_i ∈ R^F, where N is the number of nodes and F the number of features in each node, and the adjacency matrix of G_t.", "Each of the node features consists of the averaged word embeddings for the tokens in that node, as determined by the preceding graph embedding layer.", "The attention mechanism is set up using self-attention on the nodes after a learnable linear transformation W ∈ R^{2F×F} applied to all the node features: e_ij = LeakyReLU(p · W(h_i ⊕ h_j)) (2) where p ∈ R^{2F} is a learnable parameter.", "The attention coefficients α_ij are then computed by normalizing over the choices of k ∈ N using the softmax function.", "Here N refers to the neighborhood in which we compute the attention coefficients.", "This is determined by the adjacency matrix for G_t and consists of all third-order neighbors of a particular node.", "Multi-head attention is then used, calculating multiple independent attention coefficients.", "The resulting features are then concatenated and passed into a linear layer to determine g_t: g_t = f(W_g(‖_{k=1..K} (Σ_{j ∈ N} α_ij^{(k)} W^{(k)} h_j)) + b_g) (4) where k refers to the parameters of the k-th independent attention mechanism, W_g and b_g are the weights and biases of this component's output linear layer, and ‖ represents concatenation.", "Simultaneously, an encoded representation of the observation o_t is computed using a Sliding Bidirectional LSTM (SB-LSTM).", "The final state representation s_t is computed as: s_t = f(W_l(g_t ⊕ o_t) + b_l) (5) where W_l, b_l represent the final linear layer's weights and biases and o_t is the result of encoding the observation with the SB-LSTM.", "The entire set of possible actions A is pruned by scoring each a ∈ A according to the mechanism previously described, using the newly updated G_{t+1}.", "We then embed and encode all of these action strings using an LSTM encoder (Sutskever et al., 2014).", "The dashed lines in Figure 2 denote non-differentiable processes.", "The final 
Q-value for a state-action pair is: Q(s_t, a_t) = s_t · a_t (6) This method of separately computing the representations for the state and action is similar to the approach taken in the DRRN (He et al., 2016).", "We train the network using experience replay (Lin, 1993) with prioritized sampling", "(cf., (Moore and Atkeson, 1993)) and a modified version of the ε-greedy algorithm (Sutton and Barto, 2018) that we call the ε1, ε2-greedy learning algorithm.", "The experience replay strategy finds paths in the game, which are then stored as transition tuples in an experience replay buffer D.", "The ε1, ε2-greedy algorithm explores by choosing actions randomly from A with probability ε1 and from A_t with probability ε2.", "The second threshold is needed to account for situations where an action must be chosen to advance the quest for which the agent has no prior in G_t.", "That is, action pruning may remove actions essential to quest completion because those actions involve combinations of entities that have not been encountered before.", "Replay sampling from D is done by sampling a fraction ρ from transition tuples with a positive reward and 1 − ρ from the rest.", "As shown in (Narasimhan et al., 2015), prioritized sampling from experiences with a positive reward helps the deep Q-network more easily find the sparse set of transitions that advance the game.", "The exact training mechanism is described in Algorithm", "1. 
4 Game Play as Question Answering. Previous work has shown that many NLP tasks can be framed as instances of question-answering and that in doing so, one can transfer knowledge between these tasks (McCann et al., 2017).", "In the abstract, an agent playing a text adventure game can be thought of as continuously asking the question \"What is the right action to perform in this situation?\"", "When appropriately trained, the agent may be able to answer the question for itself and select a good next move to execute.", "Treating the problem as question-answering will not replace the need for exploration in text-adventure games.", "However, we hypothesize that it will cut down on the amount of exploration needed during testing time, theoretically allowing it to complete quests faster; one of the challenges of text adventure games is that the quests are puzzles, and even after training, execution of the policy requires a significant amount of exploration.", "To teach the agent to answer the question of what action is best to take given an observation, we use an offline, pre-training approach.", "The data for the pre-training approach is generated using an oracle, an agent capable of finishing a game perfectly in the least number of steps possible.", "Specifically, the agent knows exactly what action to take given the state observation in order to advance the game in the most optimal manner possible.", "Through this process, we generate a set of traces consisting of state observations and actions such that the state observation provides the context for the implicit question of What action should be", "taken? 
and the oracle's correct action is the answer.", "We then use the DrQA (Chen et al., 2017) question-answering technique to train a paired question encoder and an answer encoder that together predict the answer (action) from the question (text observation).", "The weights from the SB-LSTM in the document encoder of the DrQA system are then used to initialize the weights of the agent's SB-LSTM.", "Similarly, the embedding layers of both the graph and the LSTM action encoder are initialized with the weights from the embedding layer of the same document encoder.", "Since the DrQA embedding layers are initialized with GloVe, we are transferring word embeddings that are tuned during the training of the QA architecture.", "The game traces used to train the question-answering system come from a set of games of the same domain but with different specific configurations of the environment and different quests.", "We use the TextWorld framework (Côté et al., 2018), which uses a grammar to generate random worlds and quests.", "The types of rooms are the same, but their relative spatial configuration, the types of objects, and the specific sequence of actions needed to complete the quest are different each time.", "Table 1 reports the game statistics (Small / Large): Rooms 10 / 20; Total objects 20 / 40; Quest length 5 / 10; Branching factor 143 / 562; Vocab size 746 / 819; Average words per observation. This", "means that the agent cannot simply memorize quests.", "For pre-training to work, the agent must develop a general question-answering competence that can transfer to new quests.", "Our approach to question-answering in the context of text adventure game playing thus represents a form of transfer learning.", "We conducted experiments in the TextWorld framework (Côté et al., 2018) using their home theme.", "TextWorld uses a grammar to randomly generate game worlds and quests with given parameters.", "Games generated with TextWorld start with a zero-th observation that gives instructions for the quest; we do not allow our agent to access this information.", 
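The QA view of action selection described above can be illustrated with a toy paired-encoder scorer. This is a deliberately crude stand-in: the bag-of-embeddings encoder, vocabulary, and function names below are hypothetical, unlike the trained DrQA-style encoders used in the work:

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = {w: i for i, w in enumerate(
    "you are in the kitchen a key lies on table take go north open door".split())}
EMB = rng.standard_normal((len(VOCAB), 16))  # shared word embeddings (GloVe stand-in)

def encode(text):
    """Bag-of-embeddings encoder: a crude stand-in for a trained question
    (observation) or answer (action) encoder."""
    ids = [VOCAB[w] for w in text.lower().split() if w in VOCAB]
    if not ids:
        return np.zeros(EMB.shape[1])
    return EMB[ids].mean(axis=0)

def answer_action(observation, candidate_actions):
    """Treat the observation as the question and return the candidate action
    whose encoding scores highest against it (dot product)."""
    q = encode(observation)
    scores = [float(encode(a) @ q) for a in candidate_actions]
    return candidate_actions[int(np.argmax(scores))]
```

In the actual system the two encoders are trained on oracle traces so that the highest-scoring answer is the oracle's action; here the embeddings are random, so the example only shows the scoring mechanics.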
"The TextWorld API also provides a list of admissible actions at each state: the actions that can be performed based on the objects that are present.", "We do not allow our agent to access the admissible actions.", "We generated two sets of games with different random seeds, representing different game difficulties, which we denote as small and large.", "Small games have ten rooms and quests of length five, and large games have twenty rooms and quests of length ten.", "Statistics on the games are given in Table", "1. Quest length refers to the number of actions that the agent is required to perform in order to finish the quest; more actions are typically necessary to move around the environment and find the objects that need to be interacted with.", "The branching factor is the size of the action set A for that particular game.", "The reward function provided by TextWorld is as follows: +1 for each action taken that moves the agent closer to finishing the quest; -1 for each action taken that extends the minimum number of steps needed to finish the quest from the current stage; 0 for all other situations.", "The maximum achievable rewards for the small and large sets of games are 5 and 10, respectively.", "Table 2 (pre-training accuracy): Small: EM 46.20, Precision 56.57, Recall 63.38, F1 57.94; Large: EM 34.13, Precision 52.53, Recall 64.72, F1 55.06. This allows for", "a large amount of variance in quest quality, as measured by steps to complete the quest, that receives maximum reward.", "The following procedure for pre-training was done separately for each set of games.", "Pre-training of the SB-LSTM within the question-answering architecture is conducted by generating 200 games from the same TextWorld theme.", "The QA system was then trained on data from walkthroughs of a randomly-chosen subset of 160 of these generated games, tuned on a dev set of 20 games, and evaluated on the held-out set of 20 games.", "Table 2 provides details on the Exact Match (EM), precision, recall, and F1 scores of the QA 
system after training for the small and large sets of games.", "Precision, recall, and F1 scores are calculated over the overlapping tokens between the predicted answer and the ground truth.", "An Exact Match is when the entire predicted answer matches the ground truth.", "This score is used to tune the model based on the dev set of games.", "A random game was chosen from the test set of games and used as the environment for the agent to train its deep Q-network on.", "Thus, at no time did the QA system see the final testing game prior to the training of the KG-DQN network.", "We compare our technique to three baselines: Random command, which samples from the list of admissible actions returned by the TextWorld simulator at each step.", "LSTM-DQN, developed by Narasimhan et al. (2015).", "Bag-of-Words DQN, which uses a bag-of-words encoding with a multi-layer feed-forward network instead of an LSTM.", "To achieve the most competitive baselines, we used a randomized grid search to choose the best hyperparameters (e.g., hidden state size, final ε, update frequency, learning rate, replay buffer size) for the BOW-DQN and LSTM-DQN baselines.", "We tested three versions of our KG-DQN:", "Our models use 50-dimensional word embeddings, 2 heads on the graph attention layers, a mini-batch size of 16, and perform a gradient descent update every 5 steps taken by the agent.", "All models are evaluated by observing the", "(a) time to reward convergence, and", "(b) the average number of steps required for the agent to finish the game with ε = 0.", "1 over 5 episodes after training has completed.", "Following Narasimhan et al. 
(2015), we set ε to a non-zero value because text adventure games, by nature, require exploration to complete the quests.", "All results are reported based on multiple independent trials.", "For the large set of games, we only perform experiments on the best-performing models found in the small set of games.", "Also note that for experiments on large games, we do not display the entire learning curve for the LSTM-DQN baseline, as it converges significantly more slowly than KG-DQN.", "We run each experiment 5 times and average the results.", "Additionally, human performance on both sets of games was measured by counting the number of steps taken to finish the game, with and without instructions on the exact quest.", "We modified TextWorld to give the human players reward feedback in the form of a score; the reward function itself is identical to that received by the deep reinforcement learning agents.", "In one variation of this experiment, the human was given instructions on the potential sequence of steps required to finish the game in addition to the score, while in the other variation the human received no instructions.", "Recall that the number of steps required to finish the game for the oracle agent is 5 and 10 for the small and large maps, respectively.", "It is impossible to achieve this ideal performance due to the structure of the quest.", "The player needs to interact with objects and explore the environment in order to figure out the exact sequence of actions required to finish the quest.", "To help benchmark our agent's performance, we observed people unaffiliated with the research playing through the same TextWorld home quests as the other models.", "Those who did not receive instructions on how to finish the quest never finished a single quest and gave up after an average of 184 steps on the small map and an average of 190 steps on the large map.", "When given instructions, human players completed the quest on the 
large map in an average of 23 steps, finishing the game with the maximum reward possible.", "Also note that none of the deep reinforcement learning agents received instructions.", "On both small and large maps, all versions of KG-DQN tested converge faster than baselines (see Figure 3 for the small game and Figure 4 for the large game).", "We don't show BOW-DQN because it is strictly inferior to LSTM-DQN in all situations.", "KG-DQN converges 40% faster than baseline on the small game; both KG-DQN and the LSTM-DQN baseline reach the maximum reward of five.", "(Figure 3: reward learning curve for select experiments with the small games.) On the large game, no", "agents achieve the maximum reward of 10, and the LSTM-DQN requires more than 300 episodes to converge at the same level as KG-DQN.", "Since all versions of KG-DQN converge at approximately the same rate, we conclude that the knowledge graph, i.e., persistent memory, is the main factor helping convergence time, since it is the common element across all experiments.", "After training is complete, we measure the number of steps each agent needs to complete each quest.", "Full KG-DQN requires a number of steps equivalent to LSTM-DQN in the small game (Table 3) and in the large game (Table 4).", "Differences between LSTM-DQN and full KG-DQN are not statistically significant, p = 0.", "199 on an independent T-test.", "The ablated versions of KG-DQN (unpruned KG-DQN and non-pre-trained KG-DQN) require many more steps to complete quests.", "TextWorld's reward function allows for a lot of exploration of the environment without penalty, so it is possible for a model that has converged on reward to complete quests in as few as five steps or in many hundreds of steps.", "From these results, we conclude that the pre-training using our question-answering paradigm allows the agent to find a general understanding of how to pick good actions even when the agent has never seen the final Figure 4: Reward learning curve for select 
experiments with the large games.", "test game.", "LSTM-DQN also learns how to choose actions efficiently, but this knowledge is captured in the LSTM's cell state, whereas in KG-DQN this knowledge is made explicit in the knowledge graph and retrieved effectively by graph attention.", "Taken together, KG-DQN converges faster without loss of quest solution quality.", "We have shown that incorporating knowledge graphs into a deep Q-network can reduce training time for agents playing text-adventure games of various lengths.", "We speculate that this is because the knowledge graph provides a persistent memory of the world as it is being explored.", "While the knowledge graph allows the agent to reach optimal reward more quickly, it doesn't ensure a high-quality solution to quests.", "Action pruning using the knowledge graph and pre-training of the embeddings used in the deep Q-network result in shorter action sequences needed to complete quests.", "The insight into pre-training portions of the agent's architecture is based on converting text-adventure game playing into a question-answering activity.", "That is, at every step, the agent is asking, and trying to answer, what is the most important thing to try.", "The pre-training acts as a form of transfer learning from different but related games.", "However, question-answering alone cannot solve the text-adventure playing problem because there will always be some trial and error required.", "By addressing the challenges of partial observability and combinatorially large action spaces through persistent memory, our work on playing text-adventure games addresses a critical need for reinforcement learning for language.", "Text-adventure games can be seen as a stepping stone toward more complex, real-world tasks; the human world is one of partial understanding through communication and acting on the world using language." ]
[ "abstain", "method", "abstain", "method", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "objective", "abstain", "method", "result", "result", "result", "abstain", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "objective", "abstain", 
"abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain" ]
[ "Entity alignment (EA) aims to find the equivalent entity pairs between KGs, which is a crucial step for integrating multi-source KGs.", "For a long time, most researchers have regarded EA as a pure graph representation learning task and focused on improving graph encoders while paying little attention to the decoding process.", "In this paper, we propose an effective and efficient EA Decoding Algorithm via Third-order Tensor Isomorphism (DATTI).", "Specifically, we derive two sets of isomorphism equations: (1) Adjacency tensor isomorphism equations and (2) Gramian tensor isomorphism equations.", "By combining these equations, DATTI could effectively utilize the adjacency and inner correlation isomorphisms of KGs to enhance the decoding process of EA.", "Extensive experiments on public datasets indicate that our decoding algorithm can deliver significant performance improvements even on the most advanced EA methods, while the extra required time is less than 3 seconds.", "Knowledge graphs (KGs) illustrate the relations between real-world entities, e.g., objects, situations, or concepts, and usually are stored in the form of triples (subject, relation, object).", "Over recent years, a large number of KGs have been constructed to provide structural knowledge to facilitate downstream applications, such as recommendation systems (Cao et al., 2019) and question-answering systems (Zhao et al., 2020).", "Most KGs are independently extracted from different languages or domains.", "Thus, these KGs usually hold unique information individually but also have some shared parts.", "Integrating these cross-lingual / cross-domain KGs could provide a broader view for users, especially for minority-language users who usually suffer from a lack of language resources.", "As shown in Figure 1, entity alignment (EA) aims Figure 1: An example of cross-lingual entity 
alignment.", "to find the equivalent entity pairs between KGs, which is a crucial step for integrating KGs.", "Existing EA methods are built on the same core premise: equivalent entity pairs between KGs have similar neighborhood structures (i.e., isomorphism).", "Therefore, most existing EA methods (Wang et al., 2018; Sun et al., 2020b; Mao et al., 2020) could be abstracted into the same architecture (as shown in Figure 2): encoding the structural information of KGs into a low-dimensional vector space by Siamese graph encoders and then mapping equivalent entity pairs into the proximate space by alignment loss functions.", "For a long time, most researchers have regarded EA as a graph representation learning task and focused on improving graph encoders.", "Starting from the simplest graph encoder, TransE (Bordes et al., 2013), newer graph encoding methods have been successively introduced into EA and achieve decent improvements.", "For example, GCN-align (Wang et al., 2018) first proposed to use graph convolutional networks (GCN) (Kipf and Welling, 2017) to encode KGs.", "RSN (Guo et al., 2019) introduces recurrent neural networks (RNN) (Graves et al., 2008) and biased random walks to exploit the long-term relational dependencies existing in KGs.", "Dual-AMN (Mao et al., 2021a) proposes the proxy-matching layer and normalized hard-sample mining loss to speed up the training process.", "In stark contrast to the efforts on graph encoders, few researchers focus on improving EA decoding algorithms (Sun et al., 2020c), which have been shown to significantly improve performance and reliability in other fields, such as dependency parsing (Zmigrod et al., 2020) and machine translation (He et al., 2021).", "Earlier EA studies (Wang et al., 2018; Sun et al., 2017) simply calculate the similarities of each pair of entities and select the closest one as the alignment result.", "This naive strategy can result in one entity being aligned to multiple entities simultaneously, 
which violates the one-to-one constraint of EA.¹", "Thus, some recent studies (Xu et al., 2020; Zhu et al., 2021) propose the global alignment strategy, i.e., regarding the decoding process as a one-to-one assignment problem that could be solved by the Hungarian algorithm (Kuhn, 1955).", "Overall, these studies just use existing decoding algorithms without further exploration of KGs' characteristics.", "Similar to graph encoders, we argue that a good EA decoding algorithm should also be capable of exploiting the structural information of KGs.", "In this paper, we propose an effective and efficient EA Decoding Algorithm via Third-order Tensor Isomorphism (DATTI).", "Different from recent studies (Fey et al., 2020; Mao et al., 2021b) that regard EA as a matrix (second-order tensor) isomorphism problem, we express the isomorphism of KGs in the form of third-order tensors, which can completely describe the structural information of KGs.", "Specifically, we derive two sets of tensor isomorphism equations: (1) Adjacency tensor isomorphism equations and (2) Gramian tensor isomorphism equations.", "By combining these equations, DATTI could effectively utilize the adjacency and inner correlation isomorphisms of KGs to enhance the decoding process of EA, thus significantly improving the performance.", "However, the introduction of third-order tensors will inevitably lead to a quadratic increase in space-time complexity.", "Therefore, we adopt the randomized truncated singular value decomposition algorithm (RTSVD) (Sarlós, 2006) and the Sinkhorn operator (Sinkhorn, 1964) to improve efficiency.", "To comprehensively evaluate our proposed method, we apply DATTI to three advanced EA methods with different kinds of graph encoders.", "Experimental results on two widely used public datasets show that DATTI can deliver significant performance improvements (3.9% on Hits@1 and 3.
2% on MRR) even on the most advanced EA (footnote 1: most KGs usually have already removed duplicated entities within the same KG.)", "methods.", "Furthermore, our decoding algorithm is highly efficient.", "The decoding time is less than 3 seconds, which is almost negligible compared to the time consumption of the training process.", "The main contributions are summarized as follows: We propose an effective and efficient EA Decoding Algorithm via Third-order Tensor Isomorphism (DATTI), which consists of two sets of tensor isomorphism equations: (1) Adjacency tensor isomorphism equations and (2) Gramian tensor isomorphism equations.", "Extensive experiments on public datasets indicate that our decoding algorithm can deliver significant performance improvements even when applied to the SOTA method, while the extra required time is less than 3 seconds.", "A KG could be defined as G = (E, R, T), where E, R, and T represent the entity set, relation set, and triple set, respectively.", "Given a source graph G_s = (E_s, R_s, T_s) and a target graph G_t = (E_t, R_t, T_t), the goal of EA is to explore the one-to-one entity correspondences P_e between KGs.", "The core premise of EA methods is that equivalent entity pairs between KGs have similar neighborhood structures.", "As shown in Figure 2, most of them could be summarized into two steps: (1) Using KG embedding methods (e.g., TransE, GCN, and GAT (Velickovic et al., 2018)) to encode entities and relations into low-dimensional embeddings.", "(2) Mapping these embeddings into a unified vector space through pre-aligned entity pairs and alignment loss functions.", "To organize existing EA methods clearly, we categorize them based on the encoders and enhancement strategies in Table", "1. 
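The two-step architecture just described (encode, then map into a unified space using pre-aligned pairs) is typically trained with a contrastive alignment loss that attracts positive pairs and repulses negatives. A minimal margin-based sketch, with illustrative names and a triplet-style loss form rather than any specific method's loss:

```python
import numpy as np

def margin_alignment_loss(H_s, H_t, pos_pairs, neg_pairs, margin=1.0):
    """Pull pre-aligned entity pairs together in the unified space and push
    mismatched pairs at least `margin` further apart (triplet-style idea).
    H_s, H_t: entity embedding matrices; pairs: arrays of (src, tgt) indices."""
    pos = np.linalg.norm(H_s[pos_pairs[:, 0]] - H_t[pos_pairs[:, 1]], axis=1)
    neg = np.linalg.norm(H_s[neg_pairs[:, 0]] - H_t[neg_pairs[:, 1]], axis=1)
    return float(np.mean(np.maximum(0.0, pos - neg + margin)))
```

Once every negative pair is farther apart than the margin while the positive pairs coincide, the loss is zero, i.e., the equivalent entities have been mapped into proximate positions.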
Encoders and Losses.", "There are mainly two kinds of encoders: Trans represents TransE (Bordes et al., 2013) and subsequent derivative algorithms.", "These methods assume that entity and relation embeddings follow the equation h + r ≈ t.", "Because of their easy implementation, Trans encoders are widely used in early EA methods.", "More recently, Graph Neural Networks (GNN) gradually became the mainstream encoder because of their powerful modeling capability on graph structures.", "Inspired by language models, RSN proposes a biased random walk sampling strategy and uses an RNN to encode the sampled sequences.", "As for alignment losses, the vast majority of EA methods (Wang et al., 2018; Wu et al., 2019; Mao et al., 2020) adopt contrastive losses, e.g., the triplet loss (Schroff et al., 2015).", "These loss functions share one core idea: attracting positive entity pairs and repulsing negative entity pairs.", "Enhancement.", "Due to the lack of labeled data, several methods (Sun et al., 2018; Mao et al., 2020) adopt iterative strategies to produce semi-supervised aligned entity pairs.", "Despite significant performance improvements, the time consumption of these methods increases several times over.", "Some methods (Xu et al., 2019; Yang et al., 2019) introduce textual information (e.g., entity name embeddings) as the initial features of the GNN to provide a multi-aspect view.", "However, literal information is not always available in real applications.", "For example, there will be privacy risks when using user-generated content.", "Therefore, we will separately discuss these textual-based methods in the experiment section.", "As mentioned in Section 1, some studies (Xu et al., 2020; Wu et al., 2019) regard the decoding process as a one-to-one assignment problem.", "The assignment problem is a fundamental combinatorial optimization problem.", "An intuitive instance is to assign N jobs to N workers.", "The task is to find a one-to-one assignment plan so 
that the total profit is maximum.", "Formally, it is equivalent to maximizing the following equation: arg max_{P ∈ P_N} ⟨P, X⟩_F (1), where X ∈ R^{N×N} is the profit matrix.", "P is a permutation matrix denoting the assignment plan.", "There is exactly one entry of 1 in each row and each column of P, and 0s elsewhere.", "P_N represents the set of all N-dimensional permutation matrices.", "Here, ⟨·, ·⟩_F represents the Frobenius inner product.", "In the following, we describe our proposed decoding algorithm (DATTI), which consists of two sets of tensor isomorphism equations: (1) Adjacency tensor isomorphism equations and (2) Gramian tensor isomorphism equations.", "Furthermore, we adopt the randomized truncated singular value decomposition (RTSVD) algorithm and the Sinkhorn operator to speed up the decoding process.", "Some recent studies (Fey et al., 2020; Mao et al., 2021b) regard EA as a matrix isomorphism problem.", "These methods assume that the adjacency matrices A_s ∈ R^{|E_s|×|E_s|} of the source graph G_s and A_t ∈ R^{|E_t|×|E_t|} of the target graph G_t are isomorphic, i.e., A_s could be transformed into A_t according to the entity correspondence matrix P_e: P_e A_s P_e^⊤ = A_t (2), where P_e[i, j] = 1 indicates that e_i and e_j are equivalent.", "However, matrices (second-order tensors) cannot fully describe the adjacency information of KGs, which is stored in the form of triples.", "Therefore, we use third-order tensors to express KGs to avoid the information loss of using matrices.", "Let A_s ∈ R^{|E_s|×|R_s|×|E_s|} and A_t ∈ R^{|E_t|×|R_t|×|E_t|} be the adjacency tensors of G_s and G_t.", "A[h, r, t] = 1 indicates that the triple (h, r, t) is in the KG.", "The matrix isomorphism Equation (2) could be generalized into the third-order form as follows: A_s ×_1 P_e ×_2 P_r ×_3 P_e = A_t (3), where P_r represents the one-to-one relation correspondence matrix between G_s and G_t and ×_k represents the mode-k tensor-matrix product.", "As illustrated in Figure 3, Equation (3) can be interpreted 
as successively reordering the tensor along three axes.", "Since the number of triples |T| is usually much less than |E||R||E|, A_s and A_t are extremely sparse.", "Unfortunately, existing tensor computing frameworks (e.g., NumPy (Harris et al., 2020) and TensorFlow (Abadi et al., 2015)) provide only a few limited operators for third-order sparse tensors.", "Therefore, we have to re-transform Equation (3) into the matrix form: A_s ×_1 P_e ×_2 P_r ×_3 P_e = A_t ⟺ P_e A_s^(1) (P_e ⊗ P_r)^⊤ = A_t^(1), P_r A_s^(2) (P_e ⊗ P_e)^⊤ = A_t^(2), P_e A_s^(3) (P_r ⊗ P_e)^⊤ = A_t^(3) (4), where ⊗ represents the Kronecker product and P_e ⊗ P_r ∈ P_{(|E||R|)×(|E||R|)}.", "A^(k) represents the mode-k unfolding matrix of the tensor A, e.g., A^(1) = [A[:, :, 0] ‖ A[:, :, 1] ‖ ... ‖ A[:, :, |E|]] ∈ R^{|E|×(|E||R|)}, where ‖ is the concatenation operation.", "When A_s and A_t are second-order adjacency tensors, the above equations degrade to Equation (2): A_s ×_1 P_e ×_2 P_e = A_t ⟺ P_e A_s^(1) P_e^⊤ = A_t^(1) (5) 4.2 Gramian Isomorphism The Gramian matrix G(A) = A A^⊤ reflects the inner correlations between the row vectors of the matrix A.", "If we regard A as random variables, G(A) is equivalent to the uncentered covariance matrix.", "When A_s and A_t are isomorphic, their Gramian matrices A_s A_s^⊤ and A_t A_t^⊤ are isomorphic too: A_t A_t^⊤ = (P_e A_s P_e^⊤)(P_e A_s P_e^⊤)^⊤ = P_e A_s A_s^⊤ P_e^⊤ (6) Similar to adjacency matrices, the Gramian matrix isomorphism equation could also be generalized into the third-order form: P_e G(A_s^(1)) P_e^⊤ = G(A_t^(1)), P_r G(A_s^(2)) P_r^⊤ = G(A_t^(2)), P_e G(A_s^(3)) P_e^⊤ = G(A_t^(3)) (7) Furthermore, it is easy to prove that the following equations hold for arbitrary depth l ∈ N: P_e G(A_s^(1))^l P_e^⊤ = G(A_t^(1))^l, P_r G(A_s^(2))^l P_r^⊤ = G(A_t^(2))^l, P_e G(A_s^(3))^l P_e^⊤ = G(A_t^(3))^l (8) 4.3 Decoding via Isomorphism Although we have derived two sets of isomorphism equations, neither of them can be solved directly.", "These 
equations are equivalent to the quadratic or cubic assignment problem (Yan et al., 2016), which has been proved to be NP-hard (Lawler, 1963).", "Fortunately, these isomorphism equations could be used to enhance the decoding process.", "Let H_s^e ∈ R^{|E_s|×d_e} and H_s^r ∈ R^{|R_s|×d_r} represent the entity and relation embeddings of G_s.", "H_t^e ∈ R^{|E_t|×d_e} and H_t^r ∈ R^{|R_t|×d_r} represent the embeddings of G_t.", "Assume that these embeddings have been approximately aligned by EA methods: P_e H_s^e ≈ H_t^e, P_r H_s^r ≈ H_t^r (9) As mentioned in Section 1, some recent studies (Xu et al., 2020; Sun et al., 2020c) regard the decoding process of P_e as an assignment problem: arg min_{P_e ∈ P_|E|} ‖P_e H_s^e − H_t^e‖_F^2 ⟺ arg max_{P_e ∈ P_|E|} ⟨P_e, H_s^e (H_t^e)^⊤⟩_F (10) Since this simple decoding strategy does not utilize the structural information of KGs, we propose to introduce the adjacency and Gramian isomorphism equations into the decoding process.", "By combining Equations (4), (8), and (9), the connection between the 8-tuple {A_s, A_t, H_s^e, H_t^e, H_s^r, H_t^r, P_e, P_r} could be described as follows, for arbitrary depth l ∈ N: P_e G(A_s^(1))^l A_s^(1) (H_s^e ⊗ H_s^r) ≈ G(A_t^(1))^l A_t^(1) (H_t^e ⊗ H_t^r) (11) P_r G(A_s^(2))^l A_s^(2) (H_s^e ⊗ H_s^e) ≈ G(A_t^(2))^l A_t^(2) (H_t^e ⊗ H_t^e) (12) P_e G(A_s^(3))^l A_s^(3) (H_s^r ⊗ H_s^e) ≈ G(A_t^(3))^l A_t^(3) (H_t^r ⊗ H_t^e) (13) Detailed proof is listed in Appendix A. 
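On a small synthetic isomorphic pair, Equations (4), (6)-(8), and (11)-(15) can be checked numerically, and the resulting profit matrix can be handed to the Sinkhorn operator of Section 4.4. The sketch below is illustrative (toy sizes, dense NumPy tensors, our own naming), not the released DATTI implementation; the unfolding follows the standard Kolda-Bader convention:

```python
import numpy as np

rng = np.random.default_rng(0)
nE, nR, de, dr, L = 4, 3, 5, 2, 2  # toy |E|, |R| and embedding dims

def rand_perm(n):
    P = np.zeros((n, n))
    P[np.arange(n), rng.permutation(n)] = 1.0
    return P

def unfold1(A):
    """Mode-1 unfolding A^(1): rows indexed by h, columns by (r, t), t slower."""
    return A.transpose(0, 2, 1).reshape(A.shape[0], -1)

# source adjacency tensor A_s[h, r, t] and ground-truth permutations P_e, P_r
A_s = (rng.random((nE, nR, nE)) < 0.4).astype(float)
P_e, P_r = rand_perm(nE), rand_perm(nR)
# Eq. (3): A_t = A_s x_1 P_e x_2 P_r x_3 P_e
A_t = np.einsum('hrt,ih,jr,kt->ijk', A_s, P_e, P_r, P_e)

# perfectly aligned toy embeddings (Eq. (9) holding with equality)
H_e_s, H_r_s = rng.random((nE, de)), rng.random((nR, dr))
H_e_t, H_r_t = P_e @ H_e_s, P_r @ H_r_s

def datti_features(A1, H_e, H_r, depth):
    """Stack H^l = G(A^(1))^l A^(1) (H^e kron H^r) for l = 0..depth (Eqs. (11), (15))."""
    base = A1 @ np.kron(H_e, H_r)
    G = A1 @ A1.T  # Gramian of the unfolding
    feats, cur = [], base
    for _ in range(depth + 1):
        feats.append(cur)
        cur = G @ cur
    return np.concatenate(feats, axis=1)

F_s = datti_features(unfold1(A_s), H_e_s, H_r_s, L)
F_t = datti_features(unfold1(A_t), H_e_t, H_r_t, L)
profit = F_s @ F_t.T  # Eq. (15): sum_l H_s^l (H_t^l)^T as one block product

def sinkhorn(X, tau=0.05, n_iter=100):
    """Eqs. (16)-(17): exponentiate X / tau, then alternate row/column normalization."""
    S = np.exp((X - X.max()) / tau)  # shifting by a constant avoids overflow
    for _ in range(n_iter):
        S = S / S.sum(axis=1, keepdims=True)  # row normalization N_r
        S = S / S.sum(axis=0, keepdims=True)  # column normalization N_c
    return S

P_soft = sinkhorn(profit / max(np.abs(profit).max(), 1e-12))
```

In this ideal setting P_e @ F_s equals F_t exactly, so the true permutation maximizes the Eq. (15) objective; with real, noisy embeddings the Sinkhorn output is only a soft approximation of P_e.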
Although it looks complex, the above equations essentially have the same form as Equation (9).", "Take Equation (11) as an example: let H_s^l = G(A_s^(1))^l A_s^(1) (H_s^e ⊗ H_s^r) and H_t^l = G(A_t^(1))^l A_t^(1) (H_t^e ⊗ H_t^r); then Equation (11) can be simplified as follows: P_e H_s^l ≈ H_t^l (14) Therefore, P_e could also be solved by maximizing arg max_{P_e ∈ P_|E|} ⟨P_e, H_s^l (H_t^l)^⊤⟩_F.", "Theoretically, for arbitrary depth l ∈ N, the result of P_e should be the same.", "However, the above equations are based on the ideal isomorphic situation.", "In practice, A_s and A_t cannot always be strictly isomorphic.", "In order to reduce the impact of the noise existing in practice, P_e should fit various l: Σ_{l=0}^{L} arg max_{P_e ∈ P_|E|} ⟨P_e, H_s^l (H_t^l)^⊤⟩_F ⟶ arg max_{P_e ∈ P_|E|} ⟨P_e, Σ_{l=0}^{L} H_s^l (H_t^l)^⊤⟩_F (15) By Equation (15), we successfully integrate the adjacency and Gramian isomorphism equations into the decoding process of EA.", "Similarly, Equation (12) could be used to obtain the relation alignment result P_r.", "Because Equation (13) is equivalent to Equation (11), only one of them needs to be solved to obtain the entity alignment result P_e.", "It is noted that the entity scales |E_s| and |E_t| are usually inconsistent in practice, which is called the unbalanced assignment problem.", "Assuming that |E_s| > |E_t|, a naive solution is to pad the profit matrix with zeros such that its shape becomes R^{|E_s|×|E_s|}.", "Randomized truncated SVD.", "The introduction of third-order tensors enables DATTI to fully describe the structural information of KGs.", "However, there is no such thing as a free lunch.", "The space-time complexity also increases quadratically.", "The main bottleneck is to compute H_s^l ∈ R^{|E_s|×(d_e d_r)} and Figure 4: The singular value distribution of H_s^l obtained by TransEdge 
on DBP15K. The abscissa represents the top k% singular values, and the ordinate represents the proportion of these singular values in the total.", "H_t^l ∈ R^{|E_t|×(d_e d_r)}.", "Even with the sparse optimization trick, the complexity is still up to O(l d_r d_e |T|), which is much worse than that of most GNN encoders, O(l (d_e + d_r) |T|) (Mao et al., 2020).", "In Figure 4, we plot the singular value distribution of H_s^l obtained by TransEdge (Sun et al., 2020a) on DBP15K.", "Interestingly, the distribution is highly concentrated in the top 20%, which means the information contained in H_s^l is sparse and compressible.", "By dropping the smaller singular values of H_s^l and H_t^l, the space-time complexity could be significantly reduced.", "This paper adopts randomized truncated SVD (Sarlós, 2006) to decompose the matrices approximately, retaining only the top singular values of H_s^l and H_t^l up to a fixed retaining ratio.", "Sinkhorn operator.", "The first and most well-known algorithm for solving the assignment problem is the Hungarian algorithm (Kuhn, 1955), which is based on improving a matching along augmenting paths.", "The time complexity of the original Hungarian algorithm is O(n^4).", "Then, Jonker and Volgenant (1987) improved the algorithm to achieve an O(n^3) running time.", "Besides the Hungarian algorithm, the assignment problem could also be regarded as a special case of the optimal transport (OT) problem.", "Based on the Sinkhorn operator (Sinkhorn, 1964), Cuturi (2013) proposes a fast and completely parallelizable algorithm for the OT problem: S^0(X) = exp(X), S^k(X) = N_c(N_r(S^{k−1}(X))), Sinkhorn(X) = lim_{k→∞} S^k(X) (16),", "where N_r(X) = X ⊘ (X 1_N 1_N^⊤) and N_c(X) = X ⊘ (1_N 1_N^⊤ X) are the row- and column-wise normalization operators of a matrix, ⊘ represents element-wise division, and 1_N is a column vector of ones.", "Then, Mena et al. 
(2018) further prove that the Sinkhorn operation could also solve the assignment problem as a special case of the OT problem: arg max_{P ∈ P_N} ⟨P, X⟩_F = lim_{τ→0^+} Sinkhorn(X/τ) (17) The time complexity of the Sinkhorn operator is O(kn^2).", "According to our experimental results, a small k is enough to achieve decent performance.", "Compared with the Hungarian algorithm, the Sinkhorn operation is much more efficient.", "Therefore, this paper adopts the Sinkhorn operator to solve Equation (15).", "Our experiments are conducted on a PC with a GeForce GTX 3090 GPU and a Ryzen Threadripper 3970X CPU.", "The code and datasets are available on GitHub 2 .", "To comprehensively evaluate the proposed decoding algorithm, we experiment with two widely used public datasets: (1) DBP15K (Sun et al., 2017) consists of three cross-lingual subsets from multilingual DBpedia.", "Each subset contains 15,000 entity pairs.", "(2) SRPRS (Guo et al., 2019).", "Each subset also contains 15,000 entity pairs but with much fewer triples compared to DBP15K. The statistics of these datasets are summarized in Table", "2. To be consistent with previous studies (Wang et al., 2018; Sun et al., 2018), we randomly split 30% of the pre-aligned entity pairs for training and development while using the remaining 70% for testing.", "All the results are the average of five independent runs.", "To ensure universality, we evaluate DATTI on three advanced EA methods with different types of graph encoders: Dual-AMN (Mao et al., 2021a) is the SOTA of GNN-based methods; TransEdge (Sun et al., 2020a) is the SOTA of Trans-based methods; RSN (Guo et al., 2019) is the only EA method using an RNN as the encoder.", "Furthermore, we choose the Hungarian algorithm ( Hun.
) as the decoding baseline, proven to be effective by recent EA methods (Xu et al., 2020; Zhu et al., 2021).", "Metrics.", "Following convention, we use Hits@k and Mean Reciprocal Rank (MRR) as the evaluation metrics.", "The Hits@k score is calculated by measuring the proportion of correct pairs in the top-k.", "In particular, Hits@1 equals accuracy.", "Hyper-parameters.", "For TransEdge, we retain the top 20% of the singular values of H_s^l and H_t^l.", "Since the output dimensions of Dual-AMN (d_e = 768, d_r = 128) and RSN (d_e = d_r = 256) are much larger than those of TransEdge (d_e = d_r = 75), we set the retaining ratio to only 2%.", "Other hyper-parameters are kept the same for all datasets and methods: iterations k = 15; temperature τ =", "0.02; max depth L = 3.", "We list the main experimental results in Table", "3. Among these three EA methods, Dual-AMN beats the other baselines by more than", "5.5% on Hits@1 and", "4.2% on MRR, which indicates the advantages of GNN encoders.", "On RSN and TransEdge, the Hungarian algorithm shows decent performance improvements on Hits@1 of at least", "3.2%.", "In contrast, the Hungarian algorithm does not positively affect Dual-AMN, probably because the bi-directional nearest-neighbor iterative strategy of Dual-AMN already includes the core idea of the Hungarian algorithm.", "Our proposed DATTI consistently achieves the best performance on all datasets and baselines.", "On DBP15K, DATTI delivers performance gains of at least", "2.8% on Hits@1 and", "3.2% on MRR.", "Especially for the SOTA method Dual-AMN, DATTI further raises the performance ceiling of EA by more than", "3.9% on Hits@1.", "On SRPRS, DATTI significantly improves the performance of RSN and TransEdge.", "But for Dual-AMN, the improvements are much smaller.", "One possible explanation is that SRPRS removes too many triples, resulting in a lower performance ceiling.", "To explore the behavior of our proposed decoding algorithm in different situations, we
design the following experiments:", "Time Efficiency.", "By adopting RTSVD and the Sinkhorn operator, our proposed decoding algorithm achieves high efficiency.", "Table 4 lists the time costs of the training and decoding process (DATTI) of the three EA methods on DBP15K and SRPRS.", "DATTI requires at most 3 seconds to obtain the result, which is negligible even compared to the training process of the fastest method, Dual-AMN.", "Adjacency and Gramian Isomorphism.", "The core contribution of DATTI is to introduce the adjacency and Gramian isomorphism equations into the EA decoding process.", "To demonstrate their effectiveness, we independently add each of them to Dual-AMN.", "As shown in Table 5, both slightly improve the performance (by less than 1.6% on Hits@1).", "Interestingly, the performance gain brought by their combination is greater than the sum of their independent gains, which means these two kinds of isomorphism equations capture non-overlapping information.", "Iterations k and Temperature τ.", "The temperature τ in the Sinkhorn operator is used to make the distribution closer to one-hot, similar to the temperature in the softmax operator.", "We set τ from", "0.01 to", "0.05 and report the corresponding performance curves of DATTI (Dual-AMN) on DBP ZH-EN in Figure 5.", "If we choose an appropriate τ value, the Sinkhorn operator converges quickly to the optimal solution.", "Although τ theoretically needs to be close to zero, an overly small τ makes the algorithm unstable because of floating-point errors on very large numbers.", "In contrast, an overly large τ causes the algorithm to fail to converge.", "Depth L.", "Figure 6 lists the performance of DATTI (Dual-AMN) with different max depths L.", "In particular, L = 0 is equivalent to using only the adjacency isomorphism equations to decode P_e.", "When the depth L is less than 3, each additional layer delivers significant performance improvements on all subsets of DBP15K.", "When stacking more layers, the
performance gains become negligible or even degrade, which indicates that over-smoothing (Kipf and Welling, 2017) also exists in DATTI.", "Retaining ratio.", "To reduce the space-time complexity of DATTI, we retain only the top percentage of the singular values of H_s^l and H_t^l.", "In Figure 7, we report the Hits@1 and time cost of DATTI (Dual-AMN) on DBP ZH-EN with different retaining ratios.", "From the observation, when the retaining ratio exceeds 2%, the growth of Hits@1 becomes very slow, while the time cost still grows quadratically.", "Therefore, a retaining ratio of 2% is the sweet spot between performance and efficiency in this situation.", "In practice, the retaining ratio could be adjusted according to computing resources and data scales.", "So far, all the experiments are based on purely structure-based EA methods.", "As mentioned in Section 3.1, some methods (Xu et al., 2020; Wu et al., 2019) introduce textual information (e.g., entity names) to provide a multi-aspect view.", "Specifically, [Table 6: Hits@1 and Hits@10 of textual-based EA methods (GM-Align, RDGCN, DGMC, AttrGNN, CREA, RAGA, Init-Emb, +Hun.) on DBP ZH-EN, JA-EN, and FR-EN.]", "these methods first use machine translation systems or cross-lingual word embeddings to map entity and relation names into a unified semantic space and then average the pre-trained word embeddings to construct the initial features for entities and relations.", "In our opinion, since the initial features of entities H_e and relations H_r have been pre-mapped, these textual-based EA methods are more like decoding algorithms that eliminate the translation noise.", "In this situation, DATTI could also play a similar role, even without any pre-aligned entity pairs.", "To make fair comparisons with these textual-based EA methods, we use the same
entity name translations and pre-trained word embeddings provided by Xu et al. (2019).", "For DATTI, we retain the top 10% of the singular values of H_s^l and H_t^l, while keeping the other hyper-parameters the same.", "Table 6 lists the performance of DATTI and six baselines on DBP15K.", "Surprisingly, unsupervised DATTI outperforms all the supervised competitors, improving the performance on Hits@1 by more than", "1.3%.", "Besides showing the powerful competitiveness of DATTI, this result also indicates that existing textual-based EA methods have considerable redundancy.", "When the initial features have been pre-mapped, complex neural networks and pre-aligned entity pairs may not be necessary.", "In this paper, we propose an effective and efficient EA decoding algorithm via third-order tensor isomorphism (DATTI).", "Extensive experiments on public datasets indicate that our decoding algorithm can deliver significant performance improvements even on the most advanced EA methods, while the extra required time is less than 3 seconds." ]
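The Sinkhorn-based decoding described in the sentences above can be sketched in a few lines of numpy. This is an illustrative sketch, not the DATTI code: the function name, temperature, iteration count, and toy profit matrix are all assumptions chosen for the example.

```python
import numpy as np

def sinkhorn(X, tau=0.02, k=15):
    """Approximately solve arg max_P <P, X>_F over (soft) permutation
    matrices via the Sinkhorn operator (Cuturi, 2013; Mena et al., 2018).

    X   : (n, n) profit matrix, e.g. similarities between source/target features
    tau : temperature; smaller values push the result toward a hard permutation
    k   : number of row/column normalization iterations
    """
    S = np.exp(X / tau)                       # S^0(X) = exp(X / tau)
    for _ in range(k):
        S = S / S.sum(axis=1, keepdims=True)  # row-wise normalization N_r
        S = S / S.sum(axis=0, keepdims=True)  # column-wise normalization N_c
    return S

# Toy example: recover a known permutation from a noisy profit matrix.
rng = np.random.default_rng(0)
n = 6
perm = rng.permutation(n)
X = np.eye(n)[perm] + 0.05 * rng.standard_normal((n, n))
P = sinkhorn(X, tau=0.05, k=30)
recovered = P.argmax(axis=1)
```

Each iteration costs O(n^2), so k iterations cost O(kn^2) overall, which matches the complexity stated in the excerpt and explains why a few Sinkhorn iterations are far cheaper than the O(n^3) Hungarian algorithm at this scale.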
[ "abstain", "abstain", "objective", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "result", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "other", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", 
"abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result" ]
[ "Abstract Story visualization is an underexplored task that falls at the intersection of many important research directions in both computer vision and natural language processing.", "In this task, given a series of natural language captions which compose a story, an agent must generate a sequence of images that correspond to the captions.", "Prior work has introduced recurrent generative models which outperform text-to-image synthesis models on this task.", "However, there is room for improvement of generated images in terms of visual quality, coherence and relevance.", "We present a number of improvements to prior modeling approaches, including (1) the addition of a dual learning framework that utilizes video captioning to reinforce the semantic alignment between the story and generated images, (2) a copy-transform mechanism for sequentially-consistent story visualization, and (3) MART-based transformers to model complex interactions between frames.", "We present ablation studies to demonstrate the effect of each of these techniques on the generative power of the model for both individual images as well as the entire narrative.", "Furthermore, due to the complexity and generative nature of the task, standard evaluation metrics do not accurately reflect performance.", "Therefore, we also provide an exploration of evaluation metrics for the model, focused on aspects of the generated frames such as the presence/quality of generated characters, the relevance to captions, and the diversity of the generated images.", "We also present correlation experiments of our proposed automated metrics with human evaluations.", "1 Introduction While generative adversarial networks (GANs) have achieved impressive results on a variety of [1 Code and data: https://github.com/adymaharana/StoryViz]", "image generation tasks (Zhu et al., 2019; Qiao et al., 2019), the task of story visualization (Li et al., 2019b) is a variation of image generation that is more challenging and
underexplored.", "In this setting, there is a story which consists of a sequence of images along with captions describing the content of the images, e.g., a web comic.", "The goal of the task is to reproduce the images given the captions (Figure 1).", "The benefits of investigating this task are far reaching.", "It combines two interesting and challenging sub-areas: text-to-image synthesis and narrative understanding, providing an excellent test bed for exploring and developing multimodal modeling techniques.", "From an application perspective, such a system could be used to enhance existing textual narratives with visual scenes.", "This tool would be especially useful to comic artists, who are infamously overworked, allowing them to automatically generate initial drawings speeding up their workflow.", "Additionally, such a system would have many applications in an educational setting, allowing educators to cater to a more diverse set of learning styles by automatically generating visualizations for a given topic, such as the water cycle in a science lesson.", "Furthermore, the data in this domain is cartoon-style, meaning the generated images avoid many of the ethical issues associated with real-world data.", "For a more detailed discussion, see Section 9.", "The challenge of this task extends beyond tasks such as text-to-image or text-to-video synthesis.", "Namely, there is an explicit, narrative component to the data, which must first be accurately extracted from the text, and then consistently reproduced throughout the images.", "If the setting or a description of a character is provided in the first caption, this must be carried throughout the scene unless modified by a subsequent caption.", "Furthermore, the scenes in a single story can change drastically as the story progresses, requiring models to produce a greater variety of images than in a text-to-video task, which typically consists of short videos displaying a single action.", "To address these issues, 
we consider the task as proposed in Li et al. (2019b), which provides a baseline architecture, StoryGAN, along with datasets for the task.", "We introduce techniques that build on existing work and are focused on improving consistency across frames, resulting in images of higher visual quality.", "First, we augment the model with Dual Learning via video redescription.", "The output images are fed through a video captioning model, which is trained to reproduce the ground truth story captions.", "This provides an additional learning signal to the model, forcing it to semantically align with the given narrative.", "Next, we add a Copy-Transform module that can take generated images from previous timesteps and copy the most relevant features of those images into the next generated frame, thus making the images more consistent in appearance.", "Finally, we propose the use of Memory-Augmented Recurrent Transformer (MART) (Lei et al., 2020) to model the correlation between word phrases in the input text and corresponding regions in the generated image.", "The recurrent nature of MART allows for the learning of sophisticated interactions between the image frames, yielding images that are more consistent in terms of character appearances and background imagery.", "We call the model architecture with the aforementioned additions DU(AL)-CO(PY)-STORYGAN or DUCO-STORYGAN.", "Next, we focus on exploring alternative evaluation methods for story visualization models.", "While modeling improvements are crucial for progressing in this domain, evaluating these models is a challenge in itself.", "Like many other generative tasks, it is nontrivial to evaluate a story visualization model.", "Human evaluation is the most reliable option, but its monetary and time costs make this ill-suited to be the only evaluation method.", "Most prior work relies upon standard GAN evaluation metrics, which may provide some insight into how well the images were reproduced, yet miss out on other
aspects of the story visualization task, such as the visual consistency of the setting across frames and global semantic alignment.", "Therefore, we make evaluation another focal point of the paper, exploring a variety of automatic evaluation metrics, which capture various aspects of the task, e.g., evaluating the quality of the images, the relevance to the story, the diversity of the generated frames, and the model's ability to accurately represent the characters.", "We present results from our model and baseline models on all metrics along with qualitative results, demonstrating the improvements from our proposed techniques.", "Using these metrics, we also provide ablation analyses of our model.", "1. For the story visualization task, we improve the semantic alignment of the generated images with the input story by introducing dual learning via video redescription.", "2. We enable sequentially-consistent story visualization with the introduction of a copy-transform mechanism in the GAN framework.", "3. We enhance prior modeling techniques in story visualization with the addition of Memory Augmented Recurrent Transformer, allowing the model to learn more sophisticated interactions between image frames.", "4. We present a diverse set of automatic evaluation metrics that capture important aspects of the task and will provide insights for future work in this domain.", "We also conduct correlation experiments for these metrics with human evaluation.", "Li et al. (2019b) introduced the task of story visualization and the StoryGAN architecture for sequential text-to-image generation.", "There have been a few other works that have attempted to improve upon the architectures presented in this paper.", "PororoGAN (Zeng et al., 2019) aims to improve the semantic relevance and overall quality of the images via a variety of textual alignment modules and a patch-based image discriminator.", "Li et al.
(2020) also improve upon the StoryGAN architecture by upgrading the story encoder, GRU network, and discriminators and adding Weighted Activation Degree (Wen et al., 2019).", "Song et al. (2020) is a more recent work which makes improvements to the StoryGAN architecture; the primary contribution is adding a figure-ground generator and discriminator, which segments the figures and the background of the image.", "Our model improvements of MART, dual learning, and copy-transform build upon more recent techniques and we support them with a detailed series of ablations.", "Text-to-Image and Text-to-Video Generation.", "While story visualization is an underexplored task, there has been plenty of prior work in text-to-image synthesis.", "Most papers in this area can be traced back to StackGAN (Zhang et al., 2017).", "Subsequent work then made various modifications to this architecture, adding attention mechanisms, memory networks, and more (Xu et al., 2018; Zhu et al., 2019; Li et al., 2019a; Yi et al., 2017; Gao et al., 2019).", "Huang et al. (2018) and Qiao et al. (2019) are direct precursors of our work.", "Both of these works feed the generated output to an image captioning model that attempts to reproduce the original text.", "Our proposed dual learning approach is an expansion of this module, where we use a state-of-the-art video captioning model based upon the MART (Lei et al., 2020) architecture to provide an additional learning signal to the model and increase the semantic consistency across images.", "In the domain of text-to-video synthesis, Li et al. (2018), Pan et al. (2017), Gupta et al. (2018) and Balaji et al.
(2019) generate videos from single sentences.", "In contrast to videos, story visualization does not have the requirement that the frames flow continuously together.", "Therefore, it allows for more interesting interactions and story-level dynamics to be captured that would only be present in longer videos.", "Interactive Image Editing.", "Another task related to story visualization is interactive image editing.", "In this setting, rather than going from purely text to image, the model is given an input image along with textual instructions/directions, and must produce an output image that modifies the input image according to the text.", "This can take the form of high level semantic changes to the image, such as color and shape, as in Liu et al. (2020), Nam et al. (2018), and Chen et al. (2018), or this might take the form of Photoshop-style edits, as in Laput et al. (2013), Shi et al. (2020), and Manuvinakurike et al. (2018a).", "Alternatively, Cheng et al. (2020), Manuvinakurike et al. (2018b), and El-Nouby et al. (2019) are slightly closer to our task due to their sequential nature, where an image is modified repeatedly according to the textual feedback provided via a dialogue.", "However, unlike story visualization, these tasks do not have a narrative component.", "Furthermore, they involve repeatedly editing a single object at each timestep instead of generating diverse scenes with dynamic characters.", "Formally, the task consists of a sequence of sentences S = [ s 1 , s 2 , ..., s T ] and a sequence of images X = [ x 1 , x 2 , ..., x T ] , where the sentence s k describes the contents of the image x k .", "The model receives S as input and produces a sequence of images X = [ x 1 , x 2 , ..., x T ] , attempting to accurately reproduce X .", "As detailed in Li et al. 
(2019b), there are two aspects of this task.", "The first is local consistency, which is concerned with the quality of individual pairs in the sequence; an example is locally consistent if image x_k accurately represents the contents of sentence s_k.", "The second aspect is global consistency, which is concerned with the quality of the entire sequence.", "Namely, whether the sequence of images X accurately captures the content of the sequence of sentences S.", "The general approach to this task as followed by StoryGAN (Li et al., 2019b) is as follows: The story encoder creates the initial representation h_0 of the story S.", "This is then passed to the context encoder, which is a recurrent model that takes a sentence s_k as input and forms a representation o_k.", "Each of these representations o_k is then fed to the image generator, which outputs an image x_k.", "The generated images are passed to two discriminators, the image discriminator and story discriminator, which each evaluate the generated images x_k in different ways and produce a learning signal that can be used to adjust the parameters of the network.", "The framework of our model is based on the StoryGAN architecture.", "We improve upon the context encoder and expand the network with dual learning and copy-transform mechanisms.", "The image and [Figure 2: Illustration of the DUCO-STORYGAN architecture.]", "story discriminators, and the story encoder from the original model are retained in DUCO-STORYGAN; each contributes to a separate loss term, i.e.,
L_img, L_story, and L_KL, respectively.", "See Appendix for details on the loss terms.", "An overview of our model architecture can be seen in Figure", "2. MART Context Encoder.", "One of the primary challenges of story visualization is maintaining consistent background imagery and character appearances throughout the story.", "This is addressed with a recurrent context encoder which has access to the global narrative while encoding the caption at each time-step.", "We use the Memory Augmented Recurrent Transformer (MART) (Lei et al., 2020), where the memory is initialized with the conditioning vector h_0 from the story encoder.", "It takes word embeddings W_k = [w_k1, w_k2, ..., w_kL], where w_ij ∈ R^{1×d_w}, corresponding to the frame caption at each timestep, and produces contextualized embeddings which are then pooled into a single weighted representation c_k using attention.", "This allows the context encoder to capture sophisticated interactions among the words which the image generator can then capitalize on: [m_k1, ..., m_kL], h_k = MART([w_k1, ..., w_kL], h_{k-1}); c_k = Σ_{i=1}^{L} α_ki m_ki, where α_ki = exp(m_ki^T u) / Σ_i exp(m_ki^T u) and u is a query vector learned during training.", "The Transformer encoder is followed by a layer of GRU cells that take the contextualized embedding as input along with isometric Gaussian noise ε_k and produce an output vector g_k.", "The outputs c_k and g_k are concatenated and transformed into filters, and subjected to convolution with a projection of the sentence embedding s_k, resulting in the output vector o_k.", "See Appendix for more details.", "Image Generator.", "The image generator follows prior text-to-image generation approaches (Qiao et al., 2019; Xu et al., 2018; Zhang et al., 2017) and uses a two-stage approach.", "The first stage uses the outputs o_k; the resulting image is fed through a second stage, which aligns the contextualized word encodings m_k from MART with image subregions
generated in the first stage and reuses the weighted encodings for image refinement.", "Dual Learning via Video Redescription.", "Dual learning provides the model with an additional learning signal by taking advantage of the duality of certain tasks, i.e., if X can be used to produce Y, then Y can be used to produce X. Here, our primary task is story visualization, and we consider the secondary task of video captioning.", "We refer to this process as video redescription.", "To execute the idea of learning via video redescription, we employ a video captioning network which takes the sequence of generated images and produces a corresponding sequence of captions.", "The video captioning network is based on a recurrent encoder-decoder framework (V_enc(.), V_dec(.)) and is trained using a cross-entropy loss on the predicted probability distribution (p) over its vocabulary.", "Specifically, L_dual = −Σ_{k=1}^{T} Σ_{i=1}^{L} log p_ki(w_ki).", "The hidden state in the recurrent model helps the captioning network to identify narrative elements in the sequence of images and penalize the generative model for a lack of consistency in addition to semantic misalignment.", "We pretrain the video captioning network using ground truth data and freeze its parameters while training the generative model.", "We also include a multiplier, λ_dual, which allows us to scale this loss.", "The implementation of the encoder-decoder framework can vary.", "For our primary model, we adapt the MART video captioning network (Lei et al., 2020) to accept a 2D matrix of features at each time step, where each column corresponds to an image sub-region (see Sec.
5).", "Sequentially-Consistent Story Visualization.", "While certain components, such as character positions, will change from frame to frame, there are other components, like background and appearances, which usually carry over to adjacent frames.", "To take advantage of this continuity, we augment the model with a copy-transform mechanism.", "This mechanism can take into consideration the generated images from previous timesteps, and reuse aspects of those prior images during the current timestep.", "The copy-transform module F_copy(.) performs attention-based semantic alignment (Xu et al., 2018) between word features m_k ∈ R^{D_w×L} in the current timestep and image features i_{k-1} ∈ R^{D_i×N} from the previous step.", "Each column of i_{k-1} is a feature vector of a sub-region of the image.", "The word features are first projected into the same semantic space as the image features, i.e., m'_k = U m_k, where U ∈ R^{D'×D}.", "For the j-th image sub-region, the word-context vector is calculated as: c_jk = Σ_{i=0}^{L} β_jik m'_ik, where β_jik = exp(h_j^T m'_ik) / Σ_{i=0}^{L} exp(h_j^T m'_ik); β_jik indicates the weight assigned by the model to the i-th word when generating the j-th sub-region of the image.", "The weighted word-context matrix is then concatenated with the generative image features from the current timestep and sent for upsampling to the image generator.", "Objective.", "Bringing it all together, the final objective function of the generative model is: min_{θ_G} max_{θ_I, θ_S} L_KL + L_img + L_story + λ_dual L_dual, where θ_G, θ_I and θ_S denote the parameters of the entire generator, and of the image and story discriminators, respectively.", "See Appendix for more details.", "Dataset.", "We utilize the Pororo-SV dataset from the original StoryGAN paper, which has been adapted from a video QA dataset based on an animated series (Li et al., 2019b) 2 .", "Each sample in Pororo-SV contains 5 consecutive pairs of frames and captions.", "The original splits of Pororo-SV from
Li et al. (2019b) contain only training and test splits with nearly 80% overlap in individual frames.", "For a more challenging evaluation, we use the test split proposed in (Li et al., 2019b) as validation split (2,334 samples) and carve out an \"unseen\" test split from the training examples.", "The resulting dataset contains 10191, 2334 and 2208 samples in training, validation and test splits respectively.", "In this version, there is 58% frame overlap between the validation and train splits and 517 samples in the validation split contain at least one frame which is not present in the training set.", "Conversely, the test split has zero overlap with the training split.", "Experimental Settings.", "Our model is developed using PyTorch, building off of the original StoryGAN codebase.", "All models are trained on the proposed training split and evaluated on validation and test sets.", "We select the best checkpoints and tune hyperparameters by using the character classification F-Score on validation set (see Appendix).", "As with any task, evaluation is a critical component of story visualization; however, due to the complexity of the task and its generative nature, evaluation is nontrivial.", "For instance, characters are the focal point of any narrative and similarly should be the focus of a model when producing images for the story.", "Hence, Li et al. 
(2019b) measure the character classification accuracy within frames of generated visual stories in order to compare models.", "However, it is also important that the characters and background are consistent in appearance, and together form a cohesive story rather than an independent set of frames.", "Inspired by insights such as this, we explore an additional set of evaluation metrics that capture diverse aspects of a model's performance on visual story generation.", "Character Classification.", "We finetune the pretrained Inception-v3 (Szegedy et al., 2016) with a multi-label classification loss to identify characters in the generated image.", "Most earlier work in story visualization reports the image-level exact-match (EM) character classification accuracy.", "However, [2 We opt not to use the CLEVR-SV dataset as we believe that this dataset lacks a narrative structure and is not suitable for story visualization.]", "we contend that the exact match accuracy is not sufficient to gauge the performance of generative models, and the micro-averaged F-score of character classification should also be reported.", "For example, if Model A generates one of two characters in a frame with better quality than Model B (which generates none), it results in the same EM accuracy as Model B but an improvement in the recall/F-Score of the model, making the latter more reliable as a metric for quality.", "Our conclusion is based on the observation of consistent improvement in character classification scores with increasing training epochs and manual evaluation of image quality (see Fig. 4).", "Video Captioning Accuracy.", "In order to measure global semantic alignment between captions and generated visualizations, we propose to use video captioning models which have been pretrained on ground truth data to identify narrative elements in a sequence of frames.", "We use the Memory-Augmented Recurrent Model proposed in Lei et al.
(2020) and add a CNN encoder (Sharma et al., 2018) on top of the Transformer encoder to extract image embeddings.", "The final convolutional layer ( Mixed_7c ) in finetuned Inception-v3 is used to extract a local feature matrix f ∈ R^(64×2048) (reshaped from 2048×8×8) for each image in the story.", "We then use this trained video captioning model to caption the generated frames.", "The generated captions are compared to the ground truth captions via BLEU evaluation (footnote 3), and this functions as our proposed metric for measuring global semantic alignment between the captions and the generated story.", "This pretrained model is also used as the video captioning dual network during training of DUCO-STORYGAN.", "Discriminative Evaluation.", "Generative metrics such as BLEU are known to be noisy and unreliable.", "Hence, we also develop a discriminative evaluation setup.", "In order to compute the similarity between a generated image and the ground truth, we compare the feature representations of the two images in this discriminative setup.", "The training dataset for story visualization may contain one or more frames with the exact set of characters that are referenced in captions in the evaluation data.", "When we are checking for the presence of these characters in a generated image, we do not want to reward the model for copying the exact same frame from the training set (Footnote 3: We use the nlg-eval package (Sharma et al., 2017) for BLEU evaluation.)", "instead of generating a frame suited to the input caption.", "In order to evaluate this consistency, we propose discriminative evaluation of the story visualization model.", "Using the character annotations for the final frame of each sample in the test splits, we extract a set of 4 negative frames which are taken from elsewhere in the video but contain those specific characters (see Fig.
7 in Appendix).", "The human evaluation accuracy on this dataset is 89% (κ=0.86) and is used as an upper bound when interpreting model accuracy performance.", "The cosine similarity between Inception-v3 features of the final generated frame and the candidate frames is computed, and the most similar frame is selected as the predicted frame.", "We report Top-1/2 accuracies.", "R-Precision.", "Several prior works on text-to-image generation report the retrieval-based metric R-Precision (Xu et al., 2018) for quantifying the semantic alignment between the input text and the generated image.", "If there are R relevant documents for a query, the top R ranked retrieval results of a system are examined; if r are relevant, the R-precision is r/R.", "In our task (footnote 4), R = 1.", "The encodings from a pretrained Deep Attention-based Multimodal Similarity Model (DAMSM) are used to compute cosine similarity and rank results.", "Since this model only evaluates a single text-image pair for similarity, it is not suitable for evaluating story visualization.", "Therefore, we train a new version of DAMSM to extract global representations for the story and the sequence of images, referred to as Hierarchical DAMSM (H-DAMSM) (see Appendix).", "The models used in the aforementioned evaluation metrics are trained independently of DUCO-STORYGAN on the proposed Pororo-SV splits, and the pretrained weights are used for evaluation.", "See Appendix for other upper bounds.", "The results for the Pororo-SV validation set can be seen in Table 1.", "
The first row contains the results using the original StoryGAN model (Li et al., 2019b) (footnote 5).", "(Footnote 4: The R-precision score is obtained from 10 runs with 99 randomly picked mismatched story candidates in each run.)", "(Footnote 5: We use a reduced training dataset as compared to the original StoryGAN paper (see Sec 4).", "However, we evaluate our StoryGAN code base on their exact splits and get 26.1% exact-match accuracy, which is approximately equivalent to the 27% reported in the original paper, where they demonstrate that StoryGAN outperforms previous baselines such as ImageGAN, SVC, and SVFN.)", "The second row functions as another baseline, where we replace the GRU-based context encoder in StoryGAN with a Bidirectional Transformer (Devlin et al., 2019).", "The conditioning augmentation vector is not used to initialize the context encoder in this model since a non-recurrent Transformer lacks a hidden state.", "We see 1-2% improvements in character classification and retrieval with this model over StoryGAN.", "The third row contains results from the more recent CP-CSV model (Song et al., 2020), which uses figure-ground segmentation as an auxiliary task for preserving character features.", "Consequently, it results in a 2.68% improvement in character classification over StoryGAN and smaller improvements for other metrics.", "The final row contains results with DUCO-STORYGAN, which significantly outperforms previous models (including CP-CSV) across all metrics.", "The character classification F-Score improves by 7.16%, suggesting that the characters generated in our images are of higher visual quality.", "Similarly, we see consistent improvements in BLEU as well as R-Precision with our model.", "As demonstrated in Sec 7.1, the improvement in BLEU can be attributed to the addition of dual learning, which directly optimizes the dual task of video captioning.", "The R-Precision indicates that our model learns better global semantic alignment between the captions and images.", "Lastly, the
Top-1/2 accuracy scores show that our model is learning to generate diverse images, rather than copying scenes that feature the same characters from the training data.", "DUCO-STORYGAN performs dramatically better than other models on the unseen test split (see Table 2).", "As can be seen in Fig 3, StoryGAN performs rather poorly on unseen samples compared to DUCO-STORYGAN.", "While the former produces images that are blurry and character shapes that are faint, the latter generates frames with sharp character features.", "This is reflected in the wide improvement margins on character classification scores in Table 2.", "Similar improvements are also observed for the BLEU and R-Precision metrics, indicating that our model generates images which are more relevant to the input caption.", "When generating stories for the Pororo-SV test split, models tend to copy background elements from the samples seen in the training set, since the captions lack sufficient information about the setting.", "Hence, we observe little improvement over random chance in the discriminative accuracy scores for different models on the test split.", "For instance, instead of generating the tinted background in the ground truth in Fig. 3, the models produce a clear blue sky which is closer to samples seen in the training set.", "However, discriminative evaluation will be valuable for future work in this domain when inputs contain detailed information about the visual elements.", "We also provide per-character results for the Character F-Score.", "With DUCO-STORYGAN, we see up to 20% improvement for less frequent characters. (Table 3: Human evaluation on a 1-5 Likert scale. Win %, Ours vs. StGAN: Visual Quality 82% vs. 3%; Consistency 78% vs. 3%; Relevance 26% vs. 2%. Mean Rating, Ours vs. StGAN: Visual Quality 2.06 vs. 1.22; Consistency 2.94 vs. 1.78; Relevance 1.28 vs. 1.04.)", "We conduct human evaluation on the generated images from DUCO-STORYGAN and StoryGAN, using the three evaluation criteria listed in Li et al.
(2019b): visual quality, consistency, and relevance.", "Two annotators are presented with a caption and the generated sequence of images from both models, and asked to rate each sequence on a scale of 1-5.", "Results are presented in Table 3.", "With respect to pairwise evaluation, predictions from our model are nearly always preferred over those from StoryGAN (see Win% columns).", "Similarly, we see large improvements in the mean rating of stories generated by DUCO-STORYGAN.", "However, we also see a higher Tie% and a low mean rating for the attribute Relevance, suggesting that much work remains to be done to improve understanding of captions.", "Correlation Experiments: We also examine the correlation between our proposed metrics and human evaluation of generated images.", "We compute Pearson's correlation coefficient between human ratings of 50 samples on three different attributes using the 1-5 Likert scale and their corresponding automated metric evaluation scores.", "A significant correlation (r = 0.586) was observed between our proposed Character F-Score metric and Visual Quality, lending strength to its use as an automated metric for story visualization.", "Table 4 contains plus-one ablations for DUCO-STORYGAN.", "The first row is the StoryGAN baseline and the second row is the StoryGAN + Transformer model, as discussed in Section 6.", "We then iteratively add each of our contributions and observe the change in metrics (footnote 6).", "(Footnote 6: Statistical significance is computed with 100K samples using bootstrap (Noreen, 1989; Tibshirani and Efron, 1993).) First, we upgrade the", "Transformer encoder to MART, which brings about the largest improvements across all metrics.", "The use of word embeddings with access to the global conditioning vector and attention-based semantic alignment proves important to the task of story generation.", "Next, we use the MART context encoder with our proposed dual learning and copy-transform improvements.", "With the addition of video captioning as a learning signal, we see a 0.20% (p=0.071) improvement in character F-score and a 1.12% improvement in R-Precision (p=0.032) over MART.", "The highest improvements are observed for the BLEU score, since the model is optimized on video captioning.", "Next, we evaluate the addition of the copy-transform mechanism, where features from generated images in previous timesteps are copied to the image in the current timestep.", "We observe 1.04% improvements for character classification and a slight drop in performance on video captioning.", "Similarly, there is a 1.14% improvement in Top-1 accuracy for the discriminative dataset.", "As discussed in Section 3, we explore a variety of implementations for the dual learning component of our model.", "While MART-based video captioning works the best, we provide a discussion of other approaches in the Appendix.", "Figure 5 contains two generated examples from the Pororo-SV dataset.", "The top row in each example contains the ground truth images, the middle row the images generated
by StoryGAN, and the final row the images generated by our model.", "(Footnote: All our improvements in DUCO-STORYGAN are statistically significant, except for the discriminative evaluation and frame accuracy scores for the dual learning module.)", "(Table 2 caption: results on the unseen test split of the Pororo-SV dataset.)", "In example A, we demonstrate the superior visual quality and consistency of the frames generated by DUCO-STORYGAN, as compared to StoryGAN.", "The MART encoder allows our model to comprehend long captions as well as attend to each word while generating images.", "The retention of native character features throughout the story during regeneration can be attributed to the copy-transform mechanism in our model.", "In contrast, we see that both models fail at generating defined characters in example B. This may be due to the fact that Poby is an infrequent character in the dataset and hence both models fail to learn its features.", "We perform visual analysis of the captions and predictions from DUCO-STORYGAN and observe two major recurring themes.", "First, the frequency of characters in the training data is a significant deciding factor for generated image quality.", "We looked at the samples that contained at least Pororo (the most frequent character) and found that generated stories are better when there is only a single character in the frame's narrative as compared to multiple characters.", "This points to the inability of current story visualization models to align captions with multiple subjects/objects to the corresponding images.", "Second, generated images are poor for scenes containing infrequently occurring objects such as book/plate/boat/plane etc.
in the caption.", "This behavior is expected since the model is unaware of real-world objects that do not already appear in the training set with sufficient frequency.", "Moreover, since the Pororo-SV dataset has been adapted from the annotations of a video QA dataset, the captions often contain information that can only span over multiple frames (Pororo wakes up and then says what to Crong. Pororo stares at Pororo and throws a ball), or cannot be visualized through images (Poby gets the box and Eddy asks not to open it.).", "Hence, our results with metrics like BLEU and R-Precision, which are supposed to capture the relevance between images and captions, stay relatively low (see Tables 1 and 2).", "In this paper, we investigate the underexplored task of story visualization.", "We improve upon prior modeling approaches and demonstrate the effectiveness of these new approaches by performing a robust set of ablation experiments.", "We also present a detailed set of novel evaluation methods, which we validate by demonstrating improvements across various baselines.", "Evaluation for story visualization is a challenging open research question in itself, and we hope that these methods will encourage more work in this domain.", "From an ethics standpoint, we provide a brief overview of the data that the model is trained on in Section 4 and a more detailed discussion in the Appendix.", "We provide some analyses of the data and refer the reader to the original StoryGAN paper, where the dataset was created, for further details.", "All of the language data consists of simple English sentences.", "Our experimental results are specific to the story visualization task.", "Pororo-SV is the most challenging story visualization dataset available; therefore, our results would likely generalize to other story visualization datasets.", "While story visualization is an exciting task with many potential future applications, the generated images still contain many obvious visual artifacts and
therefore models trained on this task are still far from being deployed in any real-world settings.", "Story visualization minimizes many of the ethical issues associated with image and video generation.", "DeepFakes, which are algorithmically generated fake images, have become increasingly problematic (Nguyen et al., 2019).", "Oftentimes, these images are indistinguishable from real images, raising privacy concerns and providing a source of misinformation.", "The images that we generate here are not subject to this same issue, because they are cartoons and therefore cannot be confused with real images.", "The focus of the task is not on the realism of the images, but rather on the multimodal narrative.", "Therefore, cartoons are actually better suited for the task, as real-world images only add visual complexity that is not relevant to the narrative.", "We thank Peter Hase, Jaemin Cho, Hyounghun Kim, and the reviewers for their useful feedback.", "This work was supported by DARPA MCS Grant #N66001-19-2-4031, DARPA KAIROS Grant #FA8750-19-2-1004, ARO-YIP Award W911NF-18-1-0336, and a Google Focused Research Award.", "The views are those of the authors and not of the funding agency." ]
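The Model A / Model B argument above (exact-match accuracy cannot separate a model that generates one of two characters from one that generates none, while micro-averaged F-score can) is easy to make concrete. The sketch below is illustrative only; the helper functions and toy frame sets are ours, not the paper's code:

```python
# Illustrative sketch (not the paper's code) of why micro-averaged F-score
# distinguishes models that image-level exact-match (EM) accuracy cannot.
# Frames are represented as sets of character names; all data here is toy data.

def exact_match(preds, golds):
    """Fraction of frames whose predicted character set matches the gold set exactly."""
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)

def micro_f1(preds, golds):
    """Micro-averaged F-score over per-frame character sets."""
    tp = sum(len(p & g) for p, g in zip(preds, golds))
    fp = sum(len(p - g) for p, g in zip(preds, golds))
    fn = sum(len(g - p) for p, g in zip(preds, golds))
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

gold = [{"Pororo", "Crong"}]   # the frame should contain two characters
model_a = [{"Pororo"}]         # Model A generates one of the two recognizably
model_b = [set()]              # Model B generates neither

# EM accuracy is 0.0 for both models, but micro-F1 rewards Model A (~0.67 vs 0.0).
print(exact_match(model_a, gold), micro_f1(model_a, gold))
print(exact_match(model_b, gold), micro_f1(model_b, gold))
```

Both models score 0.0 on exact match, while micro-F1 credits Model A for its partial recall, matching the paper's motivation for reporting the F-score alongside EM accuracy.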
[ "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "other", "other", "other", "abstain", "other", "other", "other", "other", "abstain", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", 
"abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "result", "abstain", "abstain", "method", "abstain", "method", "result", "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "objective", "objective", "objective", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "other", "other" ]
[ "Opinion role labeling (ORL) is an important task for fine-grained opinion mining, which identifies important opinion arguments such as holder and target for a given opinion trigger.", "The task is highly correlative with semantic role labeling (SRL), which identifies important semantic arguments such as agent and patient for a given predicate.", "As predicate agents and patients usually correspond to opinion holders and targets respectively, SRL could be valuable for ORL.", "In this work, we propose a simple and novel method to enhance ORL by utilizing SRL, presenting semantic-aware word representations which are learned from SRL.", "The representations are then fed into a baseline neural ORL model as basic inputs.", "We verify the proposed method on a benchmark MPQA corpus.", "Experimental results show that the proposed method is highly effective.", "In addition, we compare the method with two representative methods of SRL integration as well, finding that our method can outperform the two methods significantly, achieving 1.47% higher F-scores than the better one.", "Fine-grained opinion mining aims to detect structured user opinions in text, which has drawn much attention in the natural language processing (NLP) community (Kim and Hovy, 2006; Breck et al., 2007; Ruppenhofer et al., 2008; Wilson et al., 2009; Qiu et al., 2011; Irsoy and Cardie, 2013, 2014; Liu et al., 2015; Wiegand et al., 2016).", "A structured opinion includes the key arguments of one opinion, such as expressions, holders and targets (Breck et al., 2007; Yang and Cardie, 2012, 2013; Katiyar and Cardie, 2016).", "Here we focus on opinion role labeling (ORL) (Marasovic and Frank, 2018), which identifies opinion holders and Corresponding author.", "targets assuming that the opinion expressions are given.", "Figure 1 shows an example of the task.", "The focused task behaves very similar with semantic role labeling (SRL) which identifies the core semantic roles for given predicates.", "Earlier 
work attempts to exploit a well-trained SRL model to recognize possible semantic roles for a given opinion expression, and then map the semantic roles into opinion roles (Kim and Hovy, 2006; Ruppenhofer et al., 2008).", "This heuristic approach is unable to obtain high performance for ORL because there are large mismatches between SRL and ORL.", "For example, opinion expressions are different from verb/noun predicates in SRL, and meanwhile, opinion holders and targets may not always correspond to semantic agents (ARG0) and patients (ARG1), respectively.", "We can exploit a machine-learning-based method to solve the mismatching problem between ORL and SRL.", "With a small amount of annotated ORL data, we can feed the SRL outputs as inputs to build a statistical model for ORL.", "In this way, the model can learn the consistencies and inconsistencies between SRL and ORL, arriving at a full exploration of SRL.", "The method is essentially a feature-based method, treating SRL outputs as a source of features for ORL.", "The main drawback of the method is that direct exploration of SRL outputs may lead to the error propagation problem.", "SRL errors can be further propagated into ORL outputs, resulting in degraded ORL performance.", "In this work, we instead propose a method that uses implicit semantic-aware word representations from SRL to enhance ORL.", "The method is referred to as SRL-SAWR for brevity.", "Thanks to the recent advances of encoder-decoder neural SRL models (Zhou and Xu, 2015; He et al., 2017), we can extract implicit vectorized features from the intermediate encoder module instead, avoiding the direct exploration of the final one-best SRL outputs.", "The vectorized features from the encoder part are implicit semantic-aware representations of the input sentences.", "By taking the semantic-aware representations from SRL as ORL inputs, we are able to make use of SRL information and meanwhile alleviate the error propagation problem.", "Here we exploit a neural conditional random field (CRF) model
with deep bi-directional long short-term memory networks (Bi-LSTMs) as a baseline, most of which is borrowed from Katiyar and Cardie (2016) and Marasovic and Frank (2018).", "Our preliminary experiments show that the model is able to achieve state-of-the-art performances for both ORL and SRL.", "Based on this model, we study the proposed implicit semantic-aware word representations for ORL.", "In addition, we compare this method with two other representative methods of SRL integration as well: one uses discrete SRL outputs directly as features for ORL, and the other exploits a multi-task-learning (MTL) framework to benefit ORL by SRL information.", "Experiments are conducted on the MPQA 2.0 dataset, which is a standard benchmark for opinion mining.", "Results show that SRL is highly effective for ORL, which is consistent with previous findings (Kim and Hovy, 2006; Ruppenhofer et al., 2008; Marasovic and Frank, 2018).", "Meanwhile, our implicit SRL-SAWR method can achieve the best ORL performance, with 2.23% higher F-scores than the second-best method.", "All code and datasets are publicly released for research purposes under the Apache License 2.0 at https://github.com/zhangmeishan/SRL4ORL.", "ORL aims to identify important opinion arguments for a given opinion expression.", "The task can be modeled as a sequence labeling problem, similar to SRL (Zhou and Xu, 2015; He et al., 2017).", "We adopt the {BMESO} schema to", "convert opinion arguments into a sequence of boundary tags for each word, where B, M and E denote the beginning, middle and ending words of an argument, S denotes a single-word argument, and O denotes the other words.", "Formally, given a sentence w_1...w_n and a span of opinion expression w_s...w_e (1 ≤ s ≤ e ≤ n), we aim to assign a tag to each word in the sentence, outputting t_1...t_n.", "Inspired by Katiyar and Cardie (2016) and Marasovic and Frank (2018), we exploit a deep Bi-LSTM CRF model as the baseline.", "Figure 2 shows the overall architecture of the baseline model.", "This model can achieve state-of-the-art performances for both ORL and SRL, which facilitates our study.", "The key components of the baseline model include three parts: the word representation, the deep Bi-LSTM encoder and the CRF decoder.", "The word representation takes sequential words and opinion expressions as input, mapping them into dense-valued feature vectors x_1...x_n.", "Following that, we extract high-level neural features from these vectors with the deep Bi-LSTM, arriving at h_1...h_n.", "Finally, a CRF decoder is applied to output the ORL results t_1...t_n.", "SRL aims to find the core semantic arguments for a given predicate, which is highly correlated with the ORL task.", "The semantic roles agent (ARG0) and patient (ARG1) often correspond to the opinion holder and target, respectively.", "Several works even directly transfer semantic roles into opinion roles for ORL (Kim and Hovy, 2006; Ruppenhofer et al., 2008), treating opinion expressions as the major predicates.", "These systems can achieve good performances, indicating that SRL information can be greatly useful for ORL.", "(Figure 3: SRL integration methods for ORL.)", "Here we propose a novel method to encode the SRL information implicitly, enhancing the ORL model with semantic-aware word representations from a neural SRL model (SRL-SAWR).", "Figure 3 shows the overall
architecture of our SRL integration method.", "Instead of using the discrete outputs from the SRL model, the SRL-SAWR method exploits the intermediate encoder outputs as inputs for ORL, which can alleviate the problems in the above two methods.", "On the one hand, we do not rely on the discrete outputs of a well-trained SRL model, reducing the error propagation problem.", "On the other hand, we handle ORL and SRL separately, avoiding the model structure dependencies between the two tasks.", "We assume that the external SRL system is a neural-based encoder-decoder model.", "For fair comparisons with FS-MTL, here we use the same deep Bi-LSTM CRF model for SRL as well.", "Thus the encoder outputs are the hidden vectors from the deep Bi-LSTMs.", "Assuming that the hidden vector sequence dumped from the SRL encoder is h^SRL_1...h^SRL_n, we integrate it into the ORL model by the following equation: x'_i = x_i ⊕ W_SRL h^SRL_i (1), where ⊕ denotes vector concatenation, W_SRL is a projection matrix which is a model parameter, x_i is the baseline word representation of word w_i, and x'_i is the new word representation, which will be further fed into the deep Bi-LSTM layer of the ORL model.", "Noticeably, the model parameters of the SRL encoder are also fine-tuned according to the ORL objective, as the preliminary results indicate that fine-tuning can bring better performance.", "We exploit the MPQA version 2.0 corpus (Wiebe et al., 2005; Wilson, 2008) to evaluate our models (footnote 1),", "which has been widely adopted as a benchmark dataset for opinion mining (Yang and Cardie, 2013; Katiyar and Cardie, 2016; Marasovic and Frank, 2018).", "There are 482 documents in the dataset.", "Following these works, we set aside 132 documents as the development set, and the remaining 350 documents are used as the test set in our experiments.", "We conduct experiments using five-fold cross-validation (CV) on the test set at the document level.", "Following Marasovic and Frank (2018), we focus on opinion holders and targets only.",
"The gold standard opinion expressions, holders and targets correspond to the direct subjective annotations, agent annotations and target annotations, respectively.", "We use recall (R), precision (P) and their F1-measure to evaluate our proposed models.", "The average values of the five-fold CV results are reported in this work.", "We exploit exact matching as the major metric.", "Following Marasovic and Frank (2018), two kinds of soft evaluation methods are also adopted, namely binary and proportional overlapping. Binary overlap treats an entity as correct if it contains a region overlapping with the gold-standard entity, and proportional overlap assigns a partial score proportional to the ratio of the overlapped region.", "There are several hyper-parameters that define our neural network structures.", "We simply set their values according to previous work (He et al., 2017; Marasovic and Frank, 2018), without much tuning work.", "Concretely, we set the dimension size of all embeddings to 100, the output hidden size of LSTMs to 200 and the layer number of the Bi-LSTM to 3.
For external word embeddings, we use the pretrained 100-dimensional GloVe embeddings (Pennington et al., 2014).", "We exploit online training to learn model parameters, and train on the entire training instances for 40 epochs, choosing the best-epoch model according to the performance on the development corpus.", "We use Adam (Kingma and Ba, 2014) with a learning rate of 10^-3 to update model parameters, and use gradient clipping with a max norm of 1.0 and l2-regularization with a coefficient of 10^-8.", "(Footnote 1: Available at http://www.cs.pitt.edu/mpqa.) We apply dropout with a ratio of 0.2 over the word", "representations and the output layers of Bi-LSTMs to avoid over-fitting (Srivastava et al., 2014).", "For SRL, we use the large-scale dataset of the CoNLL-2012 shared task, which is extracted from the OntoNotes v5.0 corpus.", "The description and separation of the train, development and test data sets can be found in Pradhan et al. (2013).", "The training corpus contains over 250K predicates, which is much larger than the number of opinion expressions in the ORL training corpus (3.6K on average).", "We exploit the same neural network model as the ORL for SRL, in order to make fair comparisons between our proposed model and FS-MTL.", "According to the preliminary experiments, the SRL model can reach an F-measure of 81.8%, which is comparable to the reported result (81.7%) in He et al.
(2017).", "Table 1 shows the final results on the test dataset.", "We report the overall as well as the fine-grained performance in term of opinion arguments (i.e., holder and target).", "Compared with the baseline system, our final SRL-SAWR model can bring sig-nificantly better results ( p < 10 5 under pairwise t-test).", "For fine-grained evaluations, the final model outperforms the baseline model consistently on opinion holders and targets.", "The tendencies are similar by exploiting the binary and proportional matching methods.", "The results show that SRL information is very helpful for ORL, which is consistent with previous studies (Kim and Hovy, 2006; Ruppenhofer et al., 2008; Marasovic and Frank, 2018).", "The implicit SRL-SAWR method is highly effective to integrate SRL information into the ORL model.", "Further, we compare the SRL-SAWR method with two other methods as well, namely SRL-TE and FS-MTL, respectively.", "The SRL-TE approach simply exploits the output SRL tags as inputs for ORL, embedding them as an additional source of word representations.", "The FS-MTL approach is exactly the proposed model by Marasovic and Frank (2018).", "As shown in Table 1, all three methods can bring improved performance by integrating SRL, further demonstrating that SRL is indeed valuable for ORL.", "In addition, the SRL-SAWR method can achieve the best performance among the three methods, obtaining further significant improvements by at least 63 .", "74 61 .", "51 = 2 .", "23 Model Holder Target Overall Exact F1 Baseline 73.07 42.70 58.30 SRL-SAWR 76.95 50.50 63.74 SRL-TE 75.89 46.27 61.46 FS-MTL 75.58 46.40 61.51 Binary F1 Baseline 81.57 68.34 75.15 SRL-SAWR 84.91 73.29 79.10 SRL-TE 83.47 68.79 76.33 FS-MTL 83.80 72.06 77.87 Proportional F1 Baseline 79.35 61.22 70.55 SRL-SAWR 82.82 67.31 75.08 SRL-TE 81.56 64.74 72.40 FS-MTL 81.67 65.18 73.61 Table 1: Final results on the test dataset.", "points on overall F1-measure with exact matching ( p < 10 4 ).", "For 
fine-grained evaluations, the SRL-SAWR method can also give the best performance.", "The results demonstrate that SRL-SAWR is the most effective at integrating the SRL information into a neural ORL model.", "The two methods, SRL-TE and FS-MTL, are comparable under evaluations based on exact matching.", "In this section, we conduct several experimental analyses on the test dataset to deeply understand the effectiveness of SRL information.", "First, we examine the relationship between SRL and ORL.", "SRL identifies the semantic arguments for predicates, and ORL recognizes the opinion arguments for opinion expressions.", "Intuitively, in most cases, the opinion holders correspond to the semantic agents/ARG0 of opinion triggers/expressions, and similarly, the opinion targets usually correspond to the patients/ARG1.", "(Table 2: The performance of the SRL mapping method; columns are Holder, Target and Overall. Baseline 73.07, 42.70, 58.30; SRL Mapping 68.56, 25.33, 46.29.) Figure 4 shows the percentages of opinion", "holders/targets corresponding to semantic roles, which are calculated according to the word-level mapping over the 1-best SRL outputs and the gold-standard ORL tags.", "We list only the five semantic roles with the highest mapping percentages.", "As shown, the results are consistent with our intuition.", "Thus SRL and ORL are highly correlated.", "Considering the much larger scale of annotated SRL corpora, SRL can potentially benefit ORL.", "According to the above findings, we design a simple system by mapping SRL outputs into ORL directly (Kim and Hovy, 2006; Ruppenhofer et al., 2008).", "We simply convert the semantic role ARG0 into holder, and ARG1 into target.", "Table 2 shows the performance.", "The results of the baseline system are shown for comparison.", "We can see that the simple mapping method is a feasible alternative on the whole.", "Further, we compare the SRL utilization capabilities of our proposed method and the other SRL-enhanced ORL systems,
including the above SRL Mapping method.", "We categorize the opinion arguments by whether they can be directly mapped from the SRL outputs.", "The opinion arguments which can be directly mapped from SRL, referred to as consistent arguments, should be more easily identified by SRL-enhanced models than the remaining inconsistent arguments.", "Table 3 shows the comparison results.", "Figure 5: One example for the case study, 'The white house is said to be embarrassed by the report', with gold ORL annotations, 1-best SRL tags (ARG0/ARG1), and the predictions of SRL-TE, FS-MTL, and SRL-SAWR.", "We can see that all SRL-enhanced supervised models achieve better performance for consistent arguments.", "For the inconsistent arguments, the tendency is similar, except for the holder performance of SRL-TE.", "In addition, our method gains much larger improvements, which indicates that it can better handle the inconsistencies between SRL and ORL.", "Finally, we show one case study to illustrate the advantage of our SRL-SAWR method.", "Figure 5 shows one example.", "As shown, the SRL argument ARG0, which would more probably be mapped onto holder, is annotated as target in the example.", "The SRL argument ARG1 is labeled as opinion holder, which is also an inconsistent case.", "Compared with SRL-TE and FS-MTL, our model better handles these inconsistent cases.", "This observation further confirms our results in Table 3.
4 Conclusion We proposed a simple and novel method (SRL-SAWR) to enhance ORL with SRL information by exploiting implicit semantic-aware word representations from SRL.", "The main idea is to export intermediate SRL encoder outputs as inputs for building better word representations in an ORL model.", "This method does not impose any extra requirements on ORL, and meanwhile avoids the error propagation problem caused by discrete SRL outputs.", "We conducted experiments to verify our method on the benchmark MPQA dataset.", "The results showed that our method can exploit SRL information effectively.", "We compared the proposed method with SRL-TE and FS-MTL, two representative approaches to enhancing ORL with SRL.", "The results demonstrated that our method brings the best performance among the three approaches.", "We thank all reviewers for their valuable comments.", "This work is supported by National Natural Science Foundation of China (NSFC) grants U1836222, 61602160 and 61672211." ]
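The SRL-to-ORL mapping baseline described in the sentences above (convert ARG0 into holder and ARG1 into target) can be sketched as a small tag-rewriting function. This is an illustrative sketch, not the authors' code; the BIO tag scheme and tag names here are assumptions for illustration.

```python
# Hypothetical sketch of the SRL-to-ORL mapping baseline: semantic role
# ARG0 maps to an opinion holder, ARG1 to an opinion target; all other
# roles are dropped. BIO-style tags are assumed for illustration.

SRL_TO_ORL = {"ARG0": "holder", "ARG1": "target"}

def map_srl_to_orl(srl_tags):
    """Convert a sequence of BIO-style SRL tags to ORL tags."""
    orl_tags = []
    for tag in srl_tags:
        if tag == "O":
            orl_tags.append("O")
            continue
        prefix, _, role = tag.partition("-")  # e.g. "B-ARG0" -> ("B", "ARG0")
        if role in SRL_TO_ORL:
            orl_tags.append(f"{prefix}-{SRL_TO_ORL[role]}")
        else:
            orl_tags.append("O")  # roles other than ARG0/ARG1 are dropped
    return orl_tags

print(map_srl_to_orl(["B-ARG0", "I-ARG0", "O", "B-ARG1"]))
# -> ['B-holder', 'I-holder', 'O', 'B-target']
```

Because the mapping ignores every role except ARG0/ARG1, it reproduces the low target recall reported for the SRL Mapping row of Table 2.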
[ "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "result", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "other", "other" ]
[ "We teach goal-driven agents to interactively act and speak in situated environments by training on generated curriculums.", "Our agents operate in LIGHT (Urbanek et al., 2019), a large-scale crowd-sourced fantasy text adventure game wherein an agent perceives and interacts with the world through textual natural language.", "Goals in this environment take the form of character-based quests, consisting of personas and motivations.", "We augment LIGHT by learning to procedurally generate additional novel textual worlds and quests to create a curriculum of steadily increasing difficulty for training agents to achieve such goals.", "In particular, we measure curriculum difficulty in terms of the rarity of the quest in the original training distribution: an easier environment is one that is more likely to have been found in the unaugmented dataset.", "An ablation study shows that this method of learning from the tail of a distribution results in significantly higher generalization abilities as measured by zero-shot performance on never-before-seen quests.", "A key hypothesis in the pursuit of creating goal-driven natural language-based agents posits that interactivity and environment grounding are critical for effective language learning (Barsalou, 2008; Bisk et al., 2020; Ammanabrolu and Riedl, 2021).", "Text games provide a platform on which to interactively train agents that can both act and speak in a situated manner, producing language that is both goal-driven and contextually relevant.", "Agents in text games operate (perceiving, acting in, and speaking to others in a world) entirely using textual natural language.", "These games are generally structured as sequential decision-making problems in the form of puzzles or quests that must be completed to advance in the game.", "crowdsourced fantasy text-adventure game, consisting of rich textual worlds (locations, objects, and characters with personas) and quests (motivations for each character).", "To complete these quests,
an agent must: (1) maintain character via its persona; and (2) reason in a partially observable world about potential actions and utterances based on incomplete descriptions of the locations, objects, and other characters.", "This requires several human-like competencies such as commonsense reasoning, dynamic natural language understanding, and operating in combinatorially sized language-based state-action spaces.", "Although recent work has provided evidence showing that interactive language learning via reinforcement learning (RL) in text games can be significantly more sample efficient than static supervised learning (Ammanabrolu et al., 2021) when creating goal-driven natural language agents, such agents' ability to robustly generalize to novel scenarios is limited.", "In sequential decision-making problems in particular, this generalization gap is the result of an agent simply memorizing trajectories, e.g. the sequence of actions and dialogues required to finish a game, and thus being unable to react in novel scenarios, i.e.
the agent learns from the head of the training data and simply memorizes the long tail.", "One way of decreasing this generalization gap is by training agents on procedurally generated environments, wherein the agent learns a family of parametrized tasks with significantly larger state-action spaces than singular environments, thus effectively making the memorization of trajectories impossible (Justesen et al., 2018; Cobbe et al., 2020).", "Drawing inspiration from all of these ideas, we create a method that learns to create a training curriculum of increasingly difficult, novel, procedurally generated environments.", "Our contributions are threefold: (1) we present a method of parametrizing and generating a curriculum of environments in text games; (2) we show how to effectively train reinforcement learning agents on this curriculum; and (3) we provide an experimental study showing that our method enables significantly better generalization than training on singular environments.", "This section describes our procedural generation pipeline as seen in Figure 2, starting with world and quest generation, followed by aligning the two.", "There are two main kinds of models that we use for the different modules in this pipeline: retrieval and generative.", "The LIGHT Questing Environment.", "The LIGHT game environment (Urbanek et al., 2019) is a multi-user fantasy text-adventure game consisting of a rich, diverse set of 1775 characters, 663 locations, and 3462 objects.", "Characters are able to perform templated actions to interact with both objects and characters, and can speak to other characters through free-form text dialogues.", "Actions in text games generally consist of verb phrases (VP) followed optionally by prepositional phrases (VP PP).", "For example, get OBJ, put OBJ, give OBJ to CHAR, etc.", "These actions change the state of the world, which is expressed through text descriptions.", "Quests in LIGHT (Ammanabrolu et al., 2021) take the form of a
short motivation and a goal action that is required to reach the world state needed to finish the game.", "For example, if the short motivation is 'Your motivation is to acquire a sword', then the corresponding goal state would be for the character to have a sword in their inventory, and the goal action would be get sword.", "This environment also contains a set of human expert demonstrations of people speaking and acting in character while playing the quests mentioned above.", "Further details are found in Appendix A.1.", "World Retrieval.", "The first step of the pipeline involves choosing an initial character who will perform the quest.", "For this, we uniformly randomly sample from the set of characters found in the LIGHT-Quests training set.", "The corresponding character information includes a name and a description of the character's persona.", "Given this character information, we further retrieve the location where the character is most likely to be found.", "Retrieval models are trained to return the most highly correlated output for a given input in the dataset.", "For example, a retrieval model can be asked to return the most likely character that can be found at a particular location.", "These models compare a human-annotated gold-standard label with negative candidates drawn from the dataset.", "The negative candidates provide noise that the model must filter out in order to learn representations that let it best predict the gold label.", "These models are trained via a ranking loss that maximizes the score of the gold label while simultaneously minimizing the negative candidates' scores.", "At test time, the highest-ranked candidate based on the score is selected as the model prediction.", "Specifically, we use a retrieval-based ranker model that checks for similarity of StarSpace (Wu et al., 2018) embeddings.", "Our choice of model is influenced by Fan et al.
(2019), who report state-of-the-art retrieval performance for locations in LIGHT using this model.", "The overall ranker model first trains a randomly initialized StarSpace embedding model that is designed to correlate characters with the locations they are found in.", "It learns a single bag-of-words embedding that takes into account all the individual words contained within the input, encoding character and location information as well as the previously mentioned negative retrieval candidates.", "The rest of the training is similar to the other retrieval models described earlier.", "The retrieved location information consists of a location name as well as a description of the location.", "Quest Generation.", "The quest is now generated using the existing character and location information.", "The generation-based models used in this pipeline are trained to return the most likely output sequence given an input sequence.", "Given a target sequence Y = {y_1, ..., y_M} and some input context X produced by the encoders, these models use autoregressive decoding techniques that factor the distribution over the target sequence into a chain of conditional probabilities with a causal left-to-right structure as P(Y|X; θ) = ∏_{i=1}^{M+1} p(y_i | y_{0:i-1}, X; θ), where θ represents the current network parameters.", "At test time, a special start-of-sequence token is provided to the model, which then proceeds to decode the rest of the output sequence using beam search.", "We train two BART (Lewis et al., 2020) models that encode input information via a bidirectional transformer encoder and decode autoregressively: the first takes as input character and location information and produces a short motivation (Section 2); the second takes as input character and location information plus the short motivation, and produces the sequence of LIGHT game engine executable actions needed to achieve the
motivation.", "This sequence of actions is provided by the human expert demonstrations as mentioned in Section 2.", "At this stage, the environment contains a motivated main character to perform a quest and a location for them to start in.", "We now focus on aligning the world with the quest to ensure that the quest is playable and achievable.", "Intuitively, to ensure that a quest is achievable, the world needs to contain all of the entities (locations, characters, and objects) mentioned within the quest.", "To this end, the alignment process involves training three BERT-based (Devlin et al., 2018) biencoder retrieval models to retrieve the most likely characters, locations, and objects required to flesh the environment out and make the quest achievable.", "We use the same biencoder architecture proposed by Urbanek et al. (2019), which encodes context using one transformer and candidates with another, scoring candidates via the inner product between the two encoded vectors.", "The character retrieval model is conditioned on the initial character, quest, and location, producing additional characters required to complete the world.", "We follow the setup in Ammanabrolu et al.
(2021) and restrict worlds to contain at most 2 characters, but note that this method is extendable to greater numbers of characters.", "Similarly, the location retrieval model is conditioned on the same inputs, producing, in this case, 4 neighbors of the initial location (resulting in worlds that are 5 locations large).", "These locations are connected to the initial location, and a character can move between them by using commands such as go west, go up, etc.", "Once these characters and locations are added to the world, the object retrieval model predicts the set of objects to be distributed to each location given all the character information present in it.", "The final game environment instance is complete once this object set has been added.", "Generating Curriculums.", "We generate curriculums by building off of our procedural LIGHT game instance generation pipeline.", "We make the observation that the original quests in LIGHT are heavily skewed towards certain quest types, with the majority involving goals and short motivations that contain objectives related to getting an object, and hitting or hugging another character (Figure 3).", "We further note that the first verb in the short motivation forms the basis of the quest for that agent.", "Actions in LIGHT, and more generally in text games, are executed in the game engine on the basis of verbs (engine subroutines are linked to verbs, with nouns forming arguments) and as such are primarily responsible for changing the state of the world.", "For example, get sword invokes the get subroutine that moves an object, in this case a sword, from the character's surroundings into their inventory.", "As the quest is generated early in the pipeline, with the world and the rest of the components being conditioned on it, we can say that the first verb in the short motivation is an important dimension along which we can assess the distribution of individual LIGHT game instances.", "Thus,
concretely, the verb counts from the short motivations aggregated over a set of quests represent the primary dimension along which we measure the distribution of quests.", "Parametrizing Curriculum Difficulty.", "Given the relative imbalance of this multinomial distribution, as seen in Figure 3, we hypothesize that a LIGHT agent only learns to do well on certain types of objectives and not others, memorizing trajectories for less-seen quest types, i.e. those found in the tail of the distribution.", "Preliminary evidence for this hypothesis is also seen in Prabhumoye et al. (2020), where they show a positive correlation between the number of instances of a particular type of quest during training and the final test goal-achievement performance.", "Based on these observations and our initial hypothesis, we use this particular dimension to parametrize curriculum difficulty for training LIGHT agents: quest types that are rarer in the initial training data will be harder for the agent to generalize to in a zero-shot setting.", "Intuitively, we seek to create curriculums that contain a diverse set of game instances with quest types that are not often found in the initial training data.", "Figure 3: Normalized top-20 verb count distribution of short motivations of the LIGHT-Quests dataset.", "Our earlier observations let us hypothesize that this will enable the LIGHT agent to more effectively learn from rare instances of quests as opposed to memorizing the corresponding trajectories.", "To this end, the generated curriculums each consist of a pool of quests with steadily decreasing quest-type imbalance.", "In our case, this implies that the flatness of the multinomial distribution increases
until it tends towards being uniform with respect to the categorical quest-type variable.", "This is done by running the procedural generation pipeline iteratively until the number of instances of the highest-count quest type is within n of the lowest-count quest type.", "The total number of additional generated instances is held fixed across curriculums; only the task distribution of quest types within each curriculum changes.", "Figure 6 shows that decreasing n has the intended effect of decreasing imbalance with respect to verb types.", "Generating using this pipeline has the added effect of increasing diversity within the pool of each available quest type.", "One measure of diversity within the pool of a single quest type is the types of nouns contained within the short motivations; these generally correspond to the characters, locations, and objects mentioned.", "Figure 6 shows that decreasing imbalance in the verb types for a short motivation also results in decreasing imbalance in noun types, once again corresponding to decreasing n.", "Figure 5: Architecture and training pipeline for the LIGHT RL Agent (Ammanabrolu et al., 2021).", "Short motivation generation is one of the first steps in the pipeline, i.e.
the rest of the pipeline is conditioned on it, and as such increasing the flatness of the distribution there has the effect of increasing the diversity of the distribution for downstream components.", "A2C Curriculum Training.", "Overall training is done via A2C (Mnih et al., 2016), a policy gradient algorithm that maximizes long-term expected reward by comparing the advantage A(s_t, a_t) of taking an action a_t in a state s_t to the average value of taking any valid action as predicted by the critic V(s_t).", "The setup and network architectures used are similar to Ammanabrolu et al. (2021) and are summarized in Figure 5.", "At every step, the LIGHT agent receives as input the text describing the setting, the character's persona & motivation, and the full dialogue history.", "This is then encoded using a transformer-based encoder and sent to the action and dialogue policy networks, which output an action or dialogue utterance.", "These are then passed into the LIGHT environment, which processes them and returns rewards to be used by the agent.", "Rewards.", "As seen in Figure 5, all actions, either those of the agent-in-training or the partner agent, are processed by the engine, checking for goal state completion (hence known as act goals).", "For example, if the LIGHT agent had the motivation to acquire a sword, the goal could be completed via: self act completion, where the agent acquires a sword itself by picking it up, stealing it, convincing the partner to drop theirs so it can pick it up, etc.; or partner act completion, where the agent uses dialogue utterances to convince its partner to achieve the goal for it (e.g., by persuading the partner to give it the sword).", "Table 1: Procedural generation evaluation showing metrics for each individual model in the pipeline. World Generation: Location Biencoder, Hits@10 0.543, F1 0.153; Object Biencoder, Hits@10 0.563, F1 0.154; Character Starspace, Hits@10 0.653, F1 0.289. Quest Generation: Short Motive BART, F1 0.488, Ppl 7.55; Goal Action BART, F1 0.763, Ppl 3.75.", "The naturalness of the dialogue utterances is further rated by a learned Dungeon Master (DM), a transformer-based ranker model trained on human demonstrations to score how relevant the utterance is given the character's persona and motivation.", "Further training details are provided in Appendix A.1.", "We conduct two separate evaluations: the first measures the effectiveness of the various models in the procedural environment generation pipeline as well as the effectiveness of the pipeline as a whole.", "The second provides zero-shot ablations of the LIGHT RL agents trained on the resulting curriculums and answers the questions: (1) how does the relative difficulty of the training quests affect test performance? (2) how does the diversity of the environments during training affect test performance? and (3) how are the results of the previous questions affected by pre-training?", "All of the models in the pipeline described in Section 2 are trained using only the training set of the original LIGHT and LIGHT-Quests data.", "LIGHT-Quests inherits characters, locations, and objects from the original LIGHT dataset and adds on motivations and goals in the form of quests.", "Thus, the character, location, and object retrieval models are evaluated on the LIGHT unseen test set, and the motivation and goal generation models are evaluated on the LIGHT-Quests test set.", "We report the standard array of metrics: hits@10 and F1 ranking prediction score for retrieval models; and F1 (as a harmonic average of BLEU-1 (Papineni et al., 2002) and ROUGE-1 (Lin, 2004)) and perplexity for generative models.", "Hyperparameters for all models are found in Appendix A.6.", "We observe that: (1) character retrieval is easier than retrieving locations and objects; and (2) goal action generation is easier than motivation generation.", "We hypothesize that the first trend is a direct consequence of the fact that generated motivations and goals regularly contain the
names of the characters involved but mostly leave information implicit, such as the objects required; e.g., the action hit dragon as a knight would require a weapon such as a sword to be equipped first.", "The second trend stems from the fact that goal actions can often be thought of as condensed versions of the short motivation: the number of tokens required to generate goal actions is far smaller than for short motivations.", "This implies that the goal action model is akin to a summarization model, as opposed to the short motivation model, which has the more difficult task of generating the motivation with only initial persona and location information.", "This evaluation tests the LIGHT RL agent's ability to zero-shot generalize to unseen environments.", "For all experiments in this study, agents were each zero-shot evaluated on 211 human demonstrations from the LIGHT-Quests test set for a single episode per quest across three independent runs.", "They were measured on the basis of whether or not they were able to achieve their goals in the environments conditioned on their personas: act goals measuring their ability to act consistently, and speech goals reflecting their ability to speak naturally.", "The study ablates across three dimensions in order to answer the posed research questions relating to: (1) curriculum difficulty, (2) curriculum diversity, and (3) agent pre-training.", "Curriculum Difficulty.", "To measure the overall effectiveness of the distribution tuning technique shown in Section 3, we vary the parameter n used to measure curriculum difficulty; note that a lower n corresponds to a flatter distribution and thus higher difficulty.", "As seen in Fig.
6, we generate pools of quests with steadily increasing difficulty by varying n based on the range of the original untuned distribution, with the agents being trained on each pool separately as well as on all of them in sequence through a curriculum.", "Agents received 10^7 total environment interactions per parallel A2C agent in a batch of 16.", "For the curriculum learning method, the agent received 2.5 x 10^6 interactions per pool of quests, starting with the initial pool of untuned quests and then proceeding sequentially with n = 64, 16, 2, resulting in a total of 10^7 environment interactions per parallel A2C agent.", "Curriculum Diversity.", "The variation in the combinations of quests and worlds themselves seen at training time has the potential to affect zero-shot performance (Samvelyan et al., 2021).", "We introduce two baselines that change the relative diversities of the resulting quests in the curriculums, to contrast with our proposed procedural generation pipeline.", "Generated quest details are found in Appendix A.5.", "Sampled Curriculums.", "Inspired by Chawla et al. (2002); Graves et al.
(2017), we explore an alternate method of creating curriculums by simply oversampling the same rare quests found in the tails of the distributions.", "This method does not generate new environments via the pipeline, instead choosing to sample rarer instances of quests with a higher weight when initializing each parallel A2C actor.", "This means that the distribution of verbs looks similar to that in Figure 6, but the quests within a pool are repeated multiple times and so contain no new diversity.", "Randomly Generated Curriculums.", "On the other side of the diversity spectrum, we test a method that follows the same steps as the pipeline proposed in Section 2, with the modification that the selection process for each step in the pipeline is random.", "The characters, objects, and locations are randomly selected, and the generated motivations per character are conditioned on these randomly created worlds.", "This results in a significantly higher diversity of quests per pool, at the expense of the relative coherence of the overall environment.", "Scratch.", "No pre-training is done; the encoder is a 3-layer randomly initialized transformer trained along with the policy networks.", "Adaptive.", "Pre-training is done on the tasks introduced in Ammanabrolu et al. (2021) by training a 12-layer transformer with 256 million parameters using a cross-entropy loss as seen in Humeau et al.
(2020).", "These weights are then transferred to the encoder used during RL training and frozen, with 3 randomly initialized layers appended.", "The encoder is multi-task trained on both pushshift.io Reddit (Baumgartner et al., 2020) and the commonsense dataset ATOMIC-LIGHT (Ammanabrolu et al., 2021), giving the agent general priors on how to act and speak.", "It is then fine-tuned on LIGHT, giving the agent further domain-specific priors.", "Specific task details are provided in Appendix A.1.", "Analysis.", "Table 2 presents the results of this evaluation.", "We first report that the overall proportion of a pool of procedurally generated environments that contain achievable quests or goals for a single curriculum is 0.89.", "This metric provides a proxy for measuring the accuracy of the alignment process and the overall error rate of the pipeline.", "The high achievability rate means that only a small proportion of LIGHT RL A2C agents will waste environment interactions learning from quests that cannot be completed; increasing this rate even further would likely also improve sample efficiency.", "Further, we see that distribution tuning by itself shows no significant gains in performance over the baselines trained on the original data, and in fact loses performance in certain cases.", "In contrast, learning from the individually tuned quest pools in a sequential curriculum increases performance significantly.", "This appears to indicate that LIGHT RL agents need to be trained with quest pools of steadily increasing difficulty; starting immediately on a set of quests with a high proportion of rare, generated quests can degrade performance.", "The significantly increased performance of the procedurally generated curriculums over the sampled and randomly generated curriculums indicates the relative importance of diversity within a single quest type, but only up to a certain extent.", "The sampled quests contain multiple instances of the same quest
type, but the generated ones have higher variability, leading to an increased observation space and ensuring that the agent cannot simply memorize trajectories.", "On the other hand, randomly generated quests have even higher variability but sacrifice relative coherence: it is more likely that the world contains unlikely scenarios, e.g. a desert and a swamp being located right next to each other, resulting in significantly decreased performance.", "We'd finally like to note that the adaptive pre-trained model takes advantage of the generated curriculums and distribution tuning more than the non-pre-trained scratch encoder, showing consistently higher performance across the board.", "We hypothesize that this is likely a consequence of the adaptive model having greater model capacity, with the pre-training enabling it to learn generalizable representations of the generated environments.", "Overall, trends in performance are independent of pre-training: both the scratch and the adaptive pre-trained models benefit significantly from learning from the procedurally generated curriculums.", "Text-based Game Playing and Generation.", "Recent text game playing works have focused on tackling three primary challenges: (1) how to represent agent knowledge to effectively operate in partially observable environments (Adhikari et al., 2020; Sautier et al., 2020); (2) scaling RL algorithms to handle combinatorial natural language state-action spaces (Zahavy et al., 2018; Ammanabrolu and Hausknecht, 2020; Jang et al., 2021); and (3) giving agents commonsense priors to better reason about the world (Murugesan et al., 2020, 2021).", "On the flip side, we have procedural generation of games with works such as Short and Adams (2017); Risi and Togelius (2019); Khalifa et al. (2020) that focus on creating content especially for 2D visual games via search- or reinforcement-learning-based methods.", "Ammanabrolu et al.
(2020b,a) use knowledge graphs to ground language and produce worlds and quests separately for text games from existing corpora such as stories.", "Fan et al. (2019) leverage LIGHT to learn to generate interactive fiction worlds on the basis of locations, characters, and objects; this work is closest in spirit to our own World Generation module.", "They all focus on either generating or playing games.", "Pietquin et al., 2011; Fatemi et al., 2016) and response generation (Li et al., 2016) have used RL to boost performance.", "As noted by Ammanabrolu et al. (2021), the negotiation tasks of Yarats and Lewis (2017) and Lewis et al. (2017), where two agents are trying to convince each other to perform certain actions, are related to the tasks in LIGHT-Quests.", "These works all lack environment grounding.", "Curriculum Learning.", "Curriculums in reinforcement learning have traditionally been used to set goals of steadily increasing difficulty for an agent (Bengio et al., 2009; Schmidhuber, 2013).", "The difficulty of these curriculums is generally measured via the proxy of agent performance (Narvekar et al., 2020); methods either choose to adversarially set goals of steadily increasing difficulty (Sukhbaatar et al., 2018; Racaniere et al., 2019; Dennis et al., 2020; Campero et al., 2021) or to maximize learning performance based on environment instances an agent has historically found difficult (Graves et al., 2017; Portelas et al., 2020).", "While we were inspired by these works, they all focus on searching for goals for agents, which can be difficult to scale to complex tasks such as our own natural language motivation-based goals.", "We'd also like to note that most works using procedural generation to benchmark RL agents, such as Cobbe et al. (2020); Küttler et al. (2020); Samvelyan et al.
(2021) rely on the underlying richness of the game engines to generate novel environments, as opposed to learning to generate.", "We focus on the problem of improving the zero-shot generalization abilities of goal-driven RL agents to act and speak via natural language.", "A key component of achieving this is to train the RL agents on a balanced training dataset that matches the test data in distribution.", "As this is an unlikely scenario in most real-world applications, we make the observation that we can artificially augment our pool of training environments by generating curriculums to mimic this.", "In our text game domain, with goal-driven situated natural language agents, we hypothesize, and gather supporting evidence suggesting, that an effective way to parametrize such distributions is by looking at the primary verbs within an agent's motivation and bringing the distribution of verb types as close to uniform as possible.", "Curriculum training significantly increases an agent's ability to generalize to novel scenarios.", "As noted by Urbanek et al. (2019) and Ammanabrolu et al. (2021), the ability to speak and act in these textual fantasy worlds has implications for domains beyond text-games.", "Text games are a platform where agents can interact in a relatively isolated environment and learn to interactively communicate effectively through natural language in a situated manner.", "Our methods use both large language models and deep reinforcement learning and are prone to the pitfalls that other contemporary methods using these techniques face, especially in the areas of dialogue and text game systems.", "We mitigate the first pitfall by restricting our current system to retrieval-based dialogue, ensuring that we can filter out non-normative dialogue usages beforehand, though we note that the system can be extended to generative systems as described in Prabhumoye et al.
(2020).", "Further, the LIGHT dataset is crowdsourced and contains data biases that can be attributed to the crowdworkers tasked with creating the data.", "Dinan et al. (2020) provide an in-depth discussion of the inherent dataset biases in LIGHT, such as gender bias in the distribution of characters, and techniques to mitigate them; we follow these methods to reduce their effects on both the environment generation and agent training procedures." ]
[ "objective", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "other", "method", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "objective", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "method", 
"abstain", "abstain", "abstain" ]
[ "In many natural language processing applications, identifying predictive text can be as important as the predictions themselves.", "When predicting medical diagnoses, for example, identifying predictive content in clinical notes not only enhances interpretability, but also allows unknown, descriptive (i.e., text-based) risk factors to be identified.", "We here formalize this problem as predictive extraction and address it using a simple mechanism based on linear attention.", "Our method preserves differentiability, allowing scalable inference via stochastic gradient descent.", "Further, the model decomposes predictions into a sum of contributions of distinct text spans.", "Importantly, we require only document labels, not ground-truth spans.", "Results show that our model identifies semantically-cohesive spans and assigns them scores that agree with human ratings, while preserving classification performance.", "Attention-based neural network architectures achieve human-level performance in many document classification tasks.", "However, understanding model predictions remains challenging.", "Common feature attribution methods are often inadequate, because the features of a document classification model (individual words or their embeddings) tend to have limited or ambiguous meaning in isolation, and must instead be interpreted in context.", "Rather than examining the importance of individual words and passing the contextualization task to the end-user, we may wish to extract distinct spans of text, such as sentences or paragraphs, and quantify the effect of each span on model predictions.", "However, the appropriate span boundaries depend on the document type, and processing all possible spans individually is computationally prohibitive.", "In some settings, understanding model predictions can be as important as the predictions themselves.", "When predicting medical diagnoses from clinical notes, for example, attributing predictions to specific note content
assures clinicians that the model is not relying on data artifacts that are not clinically meaningful or generalizable.", "Moreover, this process may illuminate previously unknown risk factors that are described in clinical notes but not captured in a structured manner.", "Our work is motivated by the problem of autism spectrum disorder (ASD) diagnosis, in which many early symptoms are behavioral rather than physiologic, and are documented in clinical notes using multiple-word descriptions, not individual terms.", "Moreover, extended and nuanced descriptions are important in many common document classification tasks, for instance, the scoring of movie or food reviews.", "Identifying important spans of text is a recurring theme in natural language processing.", "In extractive summarization, a document summary is created by selecting and concatenating important spans within a document (Narayan et al., 2018); and in many question answering tasks, including in the Stanford Question Answering Dataset (Rajpurkar et al., 2018), the goal is to identify a span within a paragraph of text that answers a given question.", "In both cases, training typically relies on ground truth spans, i.e., correct start and end positions are available during training, which the model learns to predict.", "In contrast, our goal is to identify distinct spans within a document that, taken together, are sufficient to predict its associated label.", "In this task, which we call predictive extraction, ground truth spans are not available; instead, training is based on document labels alone, and without predefined spans, e.g.
, sentences or paragraphs.", "Moreover, similar to feature attribution methods, we wish to assign scores to each span such that predictions are effectively decomposed into the contributions of individual spans.", "In the current work, which for simplicity focuses on binary classification, we achieve this by summing individual span scores to obtain the log-odds of a positive label.", "Since correct start and end positions are not known, they are represented as latent variables that must be learned to (a) optimize classification performance, and (b) satisfy additional span constraints; in particular, we wish to ensure that spans are concise, and do not significantly overlap.", "A brute-force approach in which all sets of spans satisfying these constraints are evaluated is computationally intractable, as the number of possibilities is O(n^k), where n is the length of the document and k is the number of spans.", "Alternatively, predicting discrete start and end positions would introduce categorical latent variables, necessitating the use of a continuous relaxation (Jang et al., 2016; Maddison et al., 2016) or gradient estimation alternatives (Tucker et al., 2017).", "Instead, we formulate a simple but effective approach in which span representations are derived directly from a continuous (probabilistic) representation of the start and end positions, avoiding more computationally expensive gradient estimation; the positions themselves are predicted using linear attention.", "Our contributions are as follows: We define predictive extraction and describe its importance, particularly for prediction tasks in which model performance exceeds human performance.", "We formulate SpanPredict, a neural network model for predictive extraction in which predicted log-odds are formulated as the sum of contributions of distinct spans.", "We quantify prediction and span selection performance on five binary classification tasks, including three real-world medical diagnosis
prediction tasks.", "In the context of these studies, we quantify the effect of span constraints on performance.", "Explaining neural network predictions is a well-known problem, one that is particularly challenging in natural language processing due to the presence of complex semantic structure and interdependencies (Belinkov and Glass, 2019).", "The importance of individual words, or their embeddings, can be quantified using word-pooling strategies in which some words contribute to predictions, and others do not (Shen et al., 2018).", "In many settings, however, examining individual words in isolation provides limited insight.", "One solution is to ask the model to generate an explanation along with each prediction (Zhang et al., 2016); inconveniently, explanations must be available during training.", "Alternatively, explanations may be selected from within the document itself.", "This strategy is closely related to question answering and extractive summarization, in which text spans are selected to answer a given question or summarize a document, respectively.", "If correct spans are known during training, representations of candidate spans can be generated and used to evaluate each span as the possible answer to a question, or for inclusion in a document summary.", "Representations for all short spans can be generated via bidirectional recurrent neural networks (Lee et al., 2016), for example, or candidate spans can be limited to individual words and sentences (Cheng and Lapata, 2016).", "Clinical notes contain redundant information as well as medical jargon and abbreviations, making meaningful text extraction more useful but also more challenging.", "Concept recognition and relation detection have been used to identify salient note content, which is then used to create a summary (Liang et al., 2019).", "Alternatively, the importance of specific content can be evaluated based on its presence or absence in subsequent notes; this concept has been used to train
extractive summarization models using discharge summaries, which distill information collected during a clinical encounter (Alsentzer and Kim, 2018), and using subsequent notes, which are more likely to repeat earlier information if it is important (Liu et al., 2018).", "In contrast to these methods, our focus is on extracting predictive text in settings where span annotations are costly to obtain.", "Lei et al. (2016) tackle this by introducing two networks, a generator and an encoder, which, respectively, select important words and make a prediction from them.", "However, theirs is a sampling-based method that must be trained via REINFORCE.", "Moreover, unlike our approach, they are unable to score individual phrases, limiting interpretability.", "Our work is perhaps most closely related to Bastings et al. (2019), which defines candidate spans using a modified Kumaraswamy distribution and then selects spans that are predictive via the fused LASSO.", "Instead, our approach uses an attention mechanism to identify promising start and end positions, which are then used to construct spans nonparametrically.", "Lastly, another approach is the prediction-constrained topic model, which provides interpretable topics that are useful for predicting labels of interest (Ren et al., 2019; Hughes et al., 2017).", "We define predictive extraction as follows.", "Given a document X and its associated binary label y, the goal of predictive extraction is to select contiguous sequences of text, called spans, that jointly are sufficient to predict the label y effectively.", "One also wishes to assign each span a score reflecting its contribution to the prediction.", "In this work, span selection is regularized by quantifying span size and overlap among spans, and performance is evaluated via human rating of randomly selected spans.", "The architecture for the proposed SpanPredict model is given in Figure 1.", "For a given passage of text, let t = 1, . . .
, T index tokens s_t, and let e_t ∈ R^D denote an embedding of token s_t.", "Note that the e_t may be linear token embeddings, but may also be contextualized embeddings generated by BERT (Devlin et al., 2018), for example.", "For each embedding e_t, two probability vectors p = softmax(E w_p) and q = softmax(E w_q), where E = [e_1, . . . , e_T], are computed using a pair of trainable sentinel attention vectors w_p, w_q ∈ R^D.", "Vectors p = [p_1, . . . , p_T] ∈ Δ^{T−1} and q = [q_1, . . . , q_T] ∈ Δ^{T−1}, where Δ^{T−1} is the (T−1)-simplex, represent the probabilities of each token in the sequence being the start and the end of a span of text, respectively.", "While it is tempting to create a span by choosing the start and end positions with highest probabilities, i.e., argmax_t p and argmax_t q, respectively, this is problematic since the argmax function is not differentiable, precluding training by standard backpropagation.", "To produce a span representation r that is amenable to backpropagation, we employ the cumulative sum function cumsum(x): x ∈ R^T ↦ c ∈ R^T, where c_t = Σ_{t′ ≤ t} x_{t′} is an element of c.", "Using this function, we define p̄ = cumsum(p) and q̄ = cumsum(q_{::−1}), where x_{::−1} is the vector x with its elements reversed.", "Intuitively, p̄_t (element of p̄) represents the probability that the start of a span has occurred by token t when coming from the left of the sequence, and q̄_t (element of q̄) represents the probability that the end has occurred by token t when coming from the right.", "We then calculate a set of weights r = p̄ ⊙ q̄, where ⊙ denotes the element-wise product.", "The product r therefore assigns large weights to tokens which have high mass under both p̄ and q̄, i.e., those that are identified as falling between the start and end points of a span.", "Rather than directly using r to compute a span representation, we first normalize r = [r_1, . . .
, r T ] such that its elements sum to 1.", "We define the elements of r as r t = r t / ( P t r t + ) and 10 8 is included for numeric stability, since r is zero everywhere if the support of p and q do not overlap, indicating a null span.", "Importantly, normalization allows us to compute a score that reflects each word's contribution to the span as a whole, regardless of the length of the overall sequence.", "We then construct a span representation m = Er RD , by taking an average of the embeddings E weighted by r .", "This method of constructing spans is a key feature of our model as it allows for span location and length to be dictated nonparametrically, driven only by the content within the identified spans and the quality of the predictions.", "We repeat this procedure J times to identify J spans m j , j = 1 , . . . , J , using unique pairs of sentinel vectors { w pj , w qj } for each span.", "Finally, we employ attention over the J span representations to generate span scores z j = w z m j .", "These scores are effectively logits , which can be interpreted as the log-odds of a positive label associated with the span.", "The output of the model, y = ( P j z j ) , where ( ) is the sigmoid function, is compared against the truth y , and the model is trained via backpropagation with binary cross-entropy loss.", "In this work, we pad or truncate documents, as appropriate, to have fixed length T .", "Tokens are mapped to dense vectors using 100-dimensional GloVe embeddings, which are then contextualized with three parallel convolutional layers with filters of kernel sizes K { 2 , 3 , 5 } prior to span selection (see Section 5.1 for details).", "We chose this simple approach over more complex embeddings, e.g., BERT, to focus on the quality of span extraction and its effect on classification performance rather than on maximizing performance per se .", "However, our approach is agnostic to the choice of embedding, and alternative embeddings may be used if desired.", 
"Our model already contains an implicit penalty for span size: specifically, the greater the number of tokens over which the model averages to compute a span representation, the smaller the contribution of influential words to the span logits.", "Hence, the model should implicitly prefer spans that are concise and not overwhelmed with filler words.", "Further, our model naturally encourages sparsity in the number of spans.", "Spans that do not carry meaning are biased towards generating weights z_j of zero since, otherwise, they would inadvertently reduce the predictive performance.", "This also means that the model implicitly learns the number of spans required to make predictions on an individual document basis.", "In practice, we observed that spans identified by our model tend to be rather long and suffer from significant overlap, which suggests the need for an additional explicit penalty to make the spans more concise and distinct.", "Methods involving L2 regularization on the magnitudes of r_j or z_j may shrink the spans or encourage sparsity, but they do not directly address the overlap issue.", "Thus, we seek a regularization method that directly compares the spans r_j with one another.", "Since the vectors {r_j}_{j=1}^J each constitute a discrete probability distribution, a natural choice is to consider divergences between them.", "Among these, the generalized Jensen-Shannon divergence (JSD) (Lin, 1991), a symmetric measure of similarity among a set of J probability distributions, is appealing for several reasons.", "The JSD is defined as JSD(r_1, . . . , r_J) = H(Σ_{j=1}^J π_j r_j) − Σ_{j=1}^J π_j H(r_j), (1) where the first term measures span overlap, the second term measures span conciseness, H(·) denotes the entropy, and π = [π_1, . . .
, π_J] ∈ Δ^{J−1} is a distribution of mixing coefficients among the J distributions {r_j}_{j=1}^J (Lin, 1991).", "While the JSD is commonly expressed as a weighted average of Kullback-Leibler divergences (Manning et al., 1999), in this form we emphasize that the JSD can be decomposed into two terms: the entropy of the (weighted) average of the r_j's, and the (weighted) average of the entropies of each r_j.", "Thus, by maximizing the JSD, we simultaneously maximize the entropy of the average distribution (i.e., minimize overlap between the r_j's) while minimizing the entropy of each r_j (i.e., maximize conciseness of each r_j).", "In addition, the JSD is bounded below and above by 0 and log(J), respectively, allowing one to monitor convergence during training (see Appendix C) (Lin, 1991).", "We use a modified JSD, JSD(r_1, . . . , r_J; α) = 2(α H(Σ_{j=1}^J π_j r_j) − (1 − α) Σ_{j=1}^J π_j H(r_j)), (2) where we recover (1) when α = 0.5.", "As α slides closer to 0, the contribution of the second term increases; hence, the smaller the value of α, the smaller we can expect the entropies of the individual distributions to be.", "This implies that the span sizes can be made smaller by reducing α.", "Lemma 3.1.", "The modified JSD is bounded above by a constant, independent of the entropies of the individual {r_j}_{j=1}^J.", "Proof.", "Defining H_1 = H(Σ_{j=1}^J π_j r_j) and H_2 = Σ_{j=1}^J π_j H(r_j), we have: JSD(r_1, . . . , r_J; α) = 2{α H_1 − (1 − α) H_2} = 2α{H_1 − H_2} − 2(1 − 2α) H_2 = 2α JSD(r_1, . . . , r_J) − 2(1 − 2α) H_2 ≤ 2α JSD(r_1, . . . , r_J), (3) where the last line follows from the fact that 1 − 2α ≥ 0 for α ∈ [0, 0.5] and H_2 ≥ 0.", "∎", "This result provides an upper bound on our JSD objective, useful for monitoring convergence during training, i.e., JSD(r_1, . . . , r_J; α) ≤ 2α log(J).", "The complete objective function we aim to minimize is thus given by:", "L = E_D[−(1 − λ)(y log ŷ + (1 − y) log(1 − ŷ)) − λ JSD(r_1, r_2, . . .
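As a concrete reference, the α-weighted JSD with uniform mixing coefficients π_j = 1/J can be computed directly from its two entropy terms. This is an illustrative sketch of the quantity, not the authors' code:

```python
import numpy as np

def entropy(d, eps=1e-12):
    # Shannon entropy of a discrete distribution d (eps guards log(0))
    return -np.sum(d * np.log(d + eps))

def modified_jsd(R, alpha=0.5):
    """Modified generalized JSD over J span-weight distributions.

    R: (J, T) array; each row r_j is a distribution over the T tokens.
    Uniform mixing coefficients pi_j = 1/J are assumed; alpha = 0.5
    recovers the standard generalized JSD.
    """
    mixture = R.mean(axis=0)                   # sum_j (1/J) r_j
    h1 = entropy(mixture)                      # overlap term
    h2 = np.mean([entropy(r) for r in R])      # conciseness term
    return 2.0 * (alpha * h1 - (1.0 - alpha) * h2)
```

Maximizing this quantity spreads the mixture over tokens (less overlap) while sharpening each individual r_j (shorter spans); by the lemma it never exceeds 2α log(J).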
, r_J; α)], (4)", "where D is our dataset, and λ ∈ [0, 1) is a hyperparameter denoting the weight of the modified JSD penalty relative to the classification loss.", "For simplicity, we choose to take π_j = 1/J in (2) and have therefore omitted π from the expression for JSD(r_1, . . . , r_J; α) in (4).", "Aside from the learning rate, our model contains only three hyperparameters, J, λ, and α, making it highly attractive for experimentation.", "Predictive performance is not very sensitive to the choice of J; here we select J to be proportional to the average document length in each dataset, but we investigate the impact of a fixed larger value of J in Appendix B. To choose λ, we employ a method similar to that used in (Smith, 2017) for choosing a learning rate.", "Specifically, we slowly ramp up λ from a minimum value of 0 in increments of 10^{−5}, batch by batch, and monitor validation accuracy.", "When the accuracy starts to level off or drop, we mark the value of λ; we found λ = 0.", "1 to be appropriate for our datasets.", "Parameter α is selected via cross-validation (trading off performance for desired span length), and is a focus of our experiments, described below.", "Datasets We perform experiments on five datasets: two publicly available non-medical datasets, and three constructed from clinical notes from the Duke University Health System.", "We consider the IMDb movie reviews dataset (Maas et al., 2011; https://www.tensorflow.org/datasets/catalog/imdb_reviews), which contains 25,000 training and testing examples of movie reviews and a binary viewer rating; and the Amazon Fine Food Reviews dataset (McAuley and Leskovec, 2013), which contains > 500,000 reviews of food items, which we subsample to 25,000 training and testing examples and 5000 validation examples for consistency.", "Positive and negative examples are balanced in each subset.", "Reviews are on a 5-point scale, but we binarize by labeling ratings of 3 or higher
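One reading of the per-example objective in (4) is a convex combination of binary cross-entropy and the modified JSD, with the JSD subtracted because it is to be maximized while L is minimized. The sketch below is our interpretation, not the reference implementation, and assumes uniform mixing coefficients:

```python
import numpy as np

def entropy(d, eps=1e-12):
    # Shannon entropy of a discrete distribution d
    return -np.sum(d * np.log(d + eps))

def spanpredict_loss(y, y_hat, R, lam=0.1, alpha=0.45, eps=1e-12):
    """Per-example objective: (1 - lam) * BCE - lam * JSD_alpha.

    y: binary label; y_hat: predicted probability sigma(sum_j z_j);
    R: (J, T) array of span-weight distributions r_j (pi_j = 1/J);
    lam trades classification accuracy against span conciseness/overlap.
    """
    bce = -(y * np.log(y_hat + eps) + (1 - y) * np.log(1 - y_hat + eps))
    h1 = entropy(R.mean(axis=0))              # entropy of the average span
    h2 = np.mean([entropy(r) for r in R])     # average per-span entropy
    jsd_alpha = 2.0 * (alpha * h1 - (1.0 - alpha) * h2)
    return (1 - lam) * bce - lam * jsd_alpha
```

Under this formulation, concise and non-overlapping spans lower the loss for a fixed prediction, and better predictions lower it for fixed spans.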
as positive.", "Average document length for IMDb is 225.4 ± 166.1 tokens, and shorter for Amazon at 84.3 ± 86.1 tokens.", "The three medical datasets were built by sampling the clinical progress notes of children visiting the Duke University Health System between October 1, 2013 and October 1, 2018.", "All analyses were approved by the Duke University Institutional Review Board.", "Diagnosis codes (ICD-9/10) were used to identify patients eventually diagnosed with autism spectrum disorder (ASD), attention deficit hyperactivity disorder (ADHD), or asthma.", "Notes from each patient group were then selected at random and labeled as positive for the condition corresponding to that group.", "While many of these notes are not directly related to the condition of interest, a large proportion contain related information or risk factors.", "Future work will focus on extracting predictive spans from all notes from a given patient; here we focus on individual notes to limit complexity and highlight span extraction performance.", "For each diagnosis prediction task, we then selected notes from age-matched controls not diagnosed with the condition as of October 1, 2018, and assigned them a negative label.", "Each dataset contains an even number of positive and negative examples (Amazon data: https://www.kaggle.com/snap/amazon-fine-food-reviews).", "Descriptive statistics are shown in Table 1.", "We first establish baseline performance for each dataset by training a CNN-based classifier that replaces span detection with max-pooling of all filter activations, but that is otherwise identical to SpanPredict.", "Pooled activations are fed into a linear layer that predicts the log-odds of a positive label.", "Our baseline model was motivated by our goal to understand how the SpanPredict module affects performance and highlight its flexibility with many baseline models, rather than to maximize performance per se.", "A CNN baseline was preferred over a BiLSTM, as the latter
contains a context window of infinite length.", "Thus, a contiguous sequence of tokens can contain information from tokens outside the window, making span identification and interpretation difficult.", "Our baseline is closely related to hierarchical SWEM (Shen et al., 2018), and despite its simplicity, achieves an accuracy of 86.3% on IMDb, which is competitive against recent benchmarks (Papers with Code, 2020; Zhang et al., 2018).", "As shown in Figure 2a, this same model achieves an AUC of 0.938.", "To contextualize GloVe embeddings, we apply C = 3 parallel convolutional layers, each of filter size F = 50, stride S = 1, kernel sizes K ∈ {2, 3, 5}, and with ReLU activations.", "Tokens are padded such that the output of each convolution is of length T.", "We then concatenate the filters to obtain refined embeddings e_t ∈ R^{CF}, which are fed into the span detection module.", "Omitting the token embedding matrix, our model contains 100(2 + 3 + 5)F + CF parameters in the convolutional layers and 2JCF parameters in the span detection filters.", "Thus, SpanPredict contains 2JCF more parameters than our baseline model, and 50,000 parameters in total.", "We take a step-wise approach to assessing model hyperparameters by first training with only the binary cross-entropy loss (λ = 0).", "We then train three models with λ = 0.1, chosen by comparing baseline performance on {0.01, 0.05, 0.1, 0.2}, and a maximum of J spans, where J is proportional to the average document length in the dataset.", "For IMDb, we choose 4; for Amazon, 3; and for all diagnoses, 7.", "Within this set of three, we vary α across the values {0.5, 0.475, 0.45, 0.4, 0.25} to assess the impact of the JSD penalty on span size and prediction performance.", "In Appendix B, we show results when J is increased to 10.", "For each experiment, we summarize classification performance using area under the ROC curve (AUC) and
span overlap using intersection over union (IoU).", "However, our goal is not to maximize classification performance, but rather to maintain good performance while also providing distinct, concise spans and scoring them accurately.", "To evaluate our span selection, we (a) quantify average span length and overlap for each model; (b) evaluate model-based span scoring, for which we have no ground truth, by having human raters score a random sample of spans; and (c) show a large number of spans selected by our models, which may be evaluated qualitatively (Appendix A).", "For IMDb and Amazon, samples for human evaluation were selected by first filtering for correctly labeled spans (z_ij < 0 when y_i = 0, where i indexes documents in the testing set and j indexes spans; and vice versa).", "The remaining spans were divided by z_ij into quantiles, and 40 samples were drawn from each (to ensure a roughly uniform distribution of scores).", "We recruited 3 native English speakers to rate each span on a 5-point scale (very negative, negative, neutral, positive, very positive).", "A similar procedure was used to select spans from each medical dataset.", "Here, we only considered correctly labeled, condition-positive notes (y_i = 1), since condition-negative notes (y_i = 0) are marked by the absence of information related to the diagnosis more than the presence of information denying it.", "To mitigate rater fatigue, we sampled 20 spans per quantile, per condition, rather than 40.", "Three neurology or psychiatry residents rated each span on a 5-point scale.", "Raters were asked to grade the conditional probability of seeing the span given that the patient has the condition.", "5.2 Training SpanPredict was built in Python using TensorFlow 2.1 and trained on a single NVIDIA Titan Xp GPU.", "We use the Adam optimizer with default values of learning rate 0.001, ε = 10^{−7}, β_1 = 0.9, and β_2 = 0.999.", "Parameters are randomly initialized from N(0, 0.
05) for the convolutional layers and N(0, 0.5) for the span detection layers.", "To regularize training, we employ Dropout (Srivastava et al., 2014); after selecting λ, Dropout rates of {0.1, 0.25, 0.5, 0.7} were tested and 0.5 was chosen.", "We train each of our models with a batch size of 8 for 300 epochs.", "Our model complexity is linear in space and time with respect to J.", "We report performance using the model stored at the epoch with the lowest overall validation loss.", "To allow the model to warm up to the JSD penalty, we linearly increase λ from 0 to 0.1 over 150 epochs and then fix its value at 0.1 for the remainder of the experiment.", "We use the Keras tokenizer with a vocabulary size of 30,000 to tokenize our text and pad or truncate each sequence to a maximum length of 512 tokens.", "In Figure 2, we describe trends in performance.", "Baseline AUCs are provided in the caption.", "Note that lower AUCs for diagnosis prediction reflect the comparative difficulty of these tasks.", "Figure 2a shows performance relative to the baseline model for varying JSD penalties.", "Performance decreases by up to 6% as the penalty increases, with the exception of ASD, on which the model performs about as well as or better than baseline for α ∈ [0.4, 0.
475].", "Thus, while some information may be lost during summarization, depending on the dataset, summarization may also serve to denoise the text, improving predictive performance.", "From Figure 2b, we find that as the penalty is increased, spans become considerably shorter.", "Inspecting the results when α = 0.25, we found that the model tends to focus on key words rather than phrases.", "From Figure 2c, we see that overlap also shrinks with span size.", "The effect is more rapid for the medical datasets, likely because the non-medical passages contain text throughout that is relevant to the sentiment of the passage, whereas medical notes contain information not relevant to the prediction task.", "(Figure 3: Example spans in the IMDb (top, positive sentiment) and Amazon (bottom, mixed sentiment) datasets.)", "A notable exception is asthma, which maintains a relatively constant span size and overlap, suggesting that diagnosing asthma requires identifying specific phrases (e.g., shortness of breath) that cannot be decomposed into individual words.", "Finally, we demonstrate in Appendix B that, for J = 10, AUC is, on average, greater, but at the cost of greater sensitivity to α.", "Figure 3 provides an illustration of individual spans inferred by SpanPredict (α = 0.
5 ).", "In the IMDb example (top), we see that the model captures two highly positive spans, each constituting 30-35% of the note, with words such as profes-sional, laughed, and appreciate appearing in the red span.", "SpanPredict is also able to capture meanings of complex positive phrases, such as \"chock full\", \"sure handed,\" \"none of the over the top,\" and \"time has come.\"", "The blue and green spans each cover only a single word; however, these words flawless and beautifully have significant positive connotation.", "This is a feature our model shares with (Shen et al., 2018), which also picks out individual tokens.", "The Amazon review (Figure 3, bottom) contains mixed sentiment.", "The green span contains the word quality, which, akin to words such as care or workmanship, is slightly positive.", "However, the blue span is filled with negative phrases.", "This is reflected in the z j scores in the inset plot, which are added to predict the log-odds of a positive label.", "We find that z j is negative for the blue span while positive for the green span.", "The orange span is most negative, suggesting that the model is able to synthesize information from the blue and green spans it overlaps to extract an overall meaning.", "Figure 4 shows the human evaluation results.", "For each span, we computed the median rating among the 3 reviewers and performed a non-parametric ANOVA (Kruskal-Wallis test) to assess agreement with model-predicted scores.", "Statistically significant differences in means ( p < 0 . 
001 ) were present in the IMDb, Amazon, and ADHD datasets, but not the ASD and Asthma datasets.", "Given our model's high agreement with human raters in the IMDb and Amazon tasks, the lower agreement observed on the medical diagnosis tasks may indicate that our model is identifying descriptive risk factors not familiar to our clinical raters.", "This hypothesis, which was suggested by our clinical collaborators, will be explored further in subsequent work.", "To measure inter-rater reliability, we computed Cohen's kappa for each dataset IMDb: 0.73, Amazon: 0.78, ASD: 0.53, ADHD: 0.64, Asthma: 0.63.", "These values illustrate the difficulty of evaluating the clinical notes compared to the review datasets.", "7 Conclusions We have introduced the task of predictive extraction , in which document labels are predicted from extracted contiguous segments of text called spans .", "We presented SpanPredict, which constructs span representations nonparametrically from contextualized embeddings by predicting start and end positions using linear attention.", "Our model is straightforward to tune, and assigns interpretable span scores that are added together to predict the log-odds of a positive label.", "Model performance and span quality are evaluated on two non-medical and three medical datasets.", "Notably, we observe high correlation between human span ratings and model-predicted span scores, particularly in the non-medical datasets, illustrating that our model selects meaningful spans and scores them accurately.", "Discrepancies between human ratings and model predictions in the medical datasets may suggest that our model is identifying condition-specific risk factors that are unfamiliar to trained clinicians.", "Future work will consider prediction and span extraction from a collection of documents rather than individual documents, allowing descriptive risk factors to be extracted from patient medical histories.", "Clinical findings consistently highlighted by 
SpanPredict will be analyzed as possible risk factors via standard statistical methods.", "Additionally, whereas SpanPredict identifies a set of spans sufficient to predict the label, future work will explore methods for ensuring that all predictive spans are identified.", "This paper introduced the problem of predictive extraction, which attempts to identify distinct spans of text within a document that, taken together, are sufficient to predict its associated label.", "Its positive impact can best be described within the context of disease classification from narrative clinical text.", "For example, ASD is a classically difficult condition to diagnose, as its symptoms are often behavioral, rather than physiological, making clinical notes critical for classification.", "Focus on classification alone, however, is not sufficient, as a clinical decision support tool requires a level of interpretability to assure clinicians that the model is not relying on data artifacts that are not clinically meaningful or generalizable.", "This requirement is present in many document classification tasks, including the scoring of food or movie reviews.", "Our newly introduced algorithm, SpanPredict, addresses this need by identifying important and unlabeled predictive phrases without substantially worsening classification performance.", "As such, SpanPredict can be used as a real-time decision aid, providing narrative summaries optimized for disease classification, thus leading to faster diagnoses and long-term improvements in function, while minimizing healthcare cost and utilization.", "While the positive impact of our contribution is clear, there are potential negative consequences related to biases in training.", "When algorithms are trained on patient datasets that are incomplete or under-/mis-representative of certain populations, they can develop discriminatory biases in their outcomes.", "When considering clinical notes, there is also potential for biased language in 
patient medical records related to race and ethnicity, including the perpetuation of negative stereotypes, blaming a patient for their symptoms, or casting doubt on patient reports and experience.", "This biased language likely changes the context of words and may negatively impact classification performance.", "This is of particular importance in ASD, where white children with ASD receive their diagnoses substantially earlier than Black children with ASD.", "Ignoring these biases might create self-fulfilling prophecies that confirm existing social biases or create new applications of bias altogether.", "In light of these negative impacts, it will be critical to evaluate the performance of SpanPredict in various populations prior to its being put into production, so that all biases are well-characterized.", "Nonetheless, the overall impact of the paper is a net positive, as it advances the field of interpretable document classification using a novel methodology that only requires labels for the classification.", "This work was funded by NIMH R01 MH121329 (Geraldine Dawson and Guillermo Sapiro, Co-PIs).", "We gratefully acknowledge the conceptual input of Guillermo Sapiro, Geraldine Dawson, and Scott Kollins in this work." ]
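The JSD-penalty warm-up described in the section above (linearly increase the coefficient from 0 to 0.1 over the first 150 of 300 epochs, then hold it constant) can be sketched as follows. This is a minimal illustration written for this edit, not the authors' released code; the function name and signature are our own assumptions.

```python
def jsd_penalty_weight(epoch: int, warmup_epochs: int = 150,
                       final_weight: float = 0.1) -> float:
    """Linearly ramp the JSD penalty coefficient from 0 to `final_weight`
    over `warmup_epochs`, then hold it constant for the remaining epochs
    (illustrative sketch of the schedule described in the text)."""
    if epoch >= warmup_epochs:
        return final_weight
    return final_weight * epoch / warmup_epochs
```

For example, under this sketch the coefficient is 0 at epoch 0, halfway to its final value at epoch 75, and stays at 0.1 from epoch 150 through the end of training.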
[ "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "result", "method", "abstain", "abstain", "method", "objective", "objective", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "method", "method", "method", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "result", "abstain", "method", "abstain", "result", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "method", "result", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", 
"abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other" ]
[ "Recent work on opinion expression identification (OEI) relies heavily on the quality and scale of the manually-constructed training corpus, which can be extremely difficult to satisfy.", "Crowdsourcing is one practical solution for this problem, aiming to create a large-scale but quality-unguaranteed corpus.", "In this work, we investigate Chinese OEI with extremely-noisy crowdsourcing annotations, constructing a dataset at a very low cost.", "Following Zhang et al. (2021), we train the annotator-adapter model by regarding all annotations as gold-standard in terms of crowd annotators, and test the model by using a synthetic expert, which is a mixture of all annotators.", "As this annotator-mixture for testing is never modeled explicitly in the training phase, we propose to generate synthetic training samples by a pertinent mixup strategy to make the training and testing highly consistent.", "The simulation experiments on our constructed dataset show that crowdsourcing is highly promising for OEI, and our proposed annotator-mixup can further enhance the crowdsourcing modeling.", "Opinion mining is a fundamental topic in the natural language processing (NLP) community, which has received great attention for decades (Liu and Zhang, 2012).", "Opinion expression identification (OEI) is a standard task of opinion mining, which aims to recognize the text spans that express particular opinions (Breck et al., 2007).", "Figure 1 shows two examples.", "This task has been generally solved by supervised learning (Irsoy and Cardie, 2014) with a well-established corpus annotated by experts.", "Almost all previous studies are based on English datasets such as MPQA (Wiebe et al., 2005).", "Constructing such a corpus is, however, by no means an easy process.", "It is highly ambiguous across different persons.", "As shown in Figure 1, it is very controversial to define the boundaries of opinion expressions (Wiebe et al., 2005).", "Actually, this problem is extremely serious for languages such as Chinese, which is
based on characters, without explicit and clearly-defined word boundaries.", "Thus, Chinese-like languages will inevitably involve more ambiguities.", "In order to obtain a high-quality corpus, we usually need to train the annotators with great effort, making them acquainted with a specific fine-grained guideline drafted by experts, and then start the data annotation strictly.", "Finally, a further round of expert checking on borderline cases where the annotators disagree most is preferable, to ensure the quality of the annotated corpus.", "Apparently, the whole process is quite expensive.", "Thus, crowdsourcing with no training (just a brief guideline) and no expert checking is more practical in real considerations (Snow et al., 2008).", "On the other hand, the difficulty of the Chinese OEI task might lead to very low-quality annotations by crowdsourcing.", "In this work, we present the first study of Chinese OEI by using crowdsourcing.", "We manually construct an OEI dataset by crowdsourcing, which is used for training.", "Indeed, the dataset is cheap but contains a great deal of noise according to our initial observation.", "We also collect small-scale development and test corpora with expert annotations for evaluation.", "1 Our dataset is constructed over a set of Chinese texts closely related to the COVID-19 topic.", "Following this, we start our investigation by using a strong BERT-BiLSTM-CRF model, treating the OEI task as a standard sequence labeling problem following the previous studies (Breck et al., 2007; Irsoy and Cardie, 2014; Katiyar and Cardie, 2016).", "Our primary goal is to answer whether these extremely-noisy crowdsourcing annotations include potential value for the OEI task.", "In order to make the best use of our crowdsourcing corpus, we follow Zhang et al.
(2021) to treat all crowd annotations as gold-standard in terms of different annotators.", "We introduce the annotator-adapter model, which employs the crowdsourcing learning approach of Zhang et al. (2021) in OEI for the first time.", "It jointly encodes both texts and annotators, then predicts the corresponding crowdsourcing annotations in the BERT-BiLSTM-CRF architecture.", "Concretely, we train the annotator-adapter model by each individual annotator and the corresponding annotations, then test the model by using a pseudo expert annotator, which is a linear mixture of crowd annotators.", "Considering that this expert is never modeled during the training, we further exploit a simple mixup (Zhang et al., 2018) strategy to simulate the expert decoding accurately.", "Experimental results show that crowdsourcing is highly competitive, giving an overall F1 score of 53.86 even with a large amount of noise, while the F1 score of the expert-corpus-trained model is 57.08.", "We believe that this performance gap is totally acceptable for building OEI application systems.", "In addition, our annotator-mixup strategy can further boost the performance of the annotator-adapter model, giving an F1 increase of 54.59 - 53.86 = 0.73.", "We conduct several analyses to understand the OEI with crowdsourcing and our suggested methods comprehensively.", "We present the initial work of investigating the OEI task with crowdsourcing annotations, showing its capability on Chinese.", "We construct a Chinese OEI dataset with crowdsourcing annotations, which is not only valuable for Chinese OEI but also instructive for crowdsourcing research.", "1 In addition, we provide expert annotations of the training set to train an upper-bound model.", "We introduce the annotator-adapter for crowdsourcing OEI and propose the annotator-mixup strategy, which can effectively improve the crowdsourcing modeling.", "The outbreak of COVID-19 brings strong demand for building robust Chinese
opinion mining systems, which are practically built in a supervised manner.", "A large-scale training corpus is the key to the system construction, while almost all existing related datasets are in English (Wiebe et al., 2005).", "Hence, we manually construct a Chinese OEI dataset by crowdsourcing.", "We focus on opinion expressions with positive or negative polarities only.", "The construction consists of four steps: (1) text collection, (2) annotator recruitment, (3) crowd annotation, and (4) expert checking and correction.", "We choose Sina Weibo 2 , which is a Chinese social media platform similar to Twitter, as our data source.", "To collect the texts strongly related to COVID-19, we select around 8k posts that were created from January to April 2020 and are related to seven hot topics (Table 1).", "To make these posts ready for annotating, we use HarvestText 3 to clean them and segment the resulting texts into sentences.", "Next, we conduct another cleaning step to remove the duplicates and sentences with relatively poor written styles (e.g., a high proportion of non-Chinese symbols, very short/long length, etc.).", "After the above procedure, there is still a large proportion of sentences that involve no sentiment.", "2 https://weibo.com 3 https://github.com/blmoistawinde/HarvestText", "So we filter them out with a BERT sentiment classifier trained on an open-access Weibo sentiment classification dataset. 4", "Only sentences with high confidence of not expressing any sentiment are dropped, 5 so we can keep the most valuable contents while avoiding unnecessary annotations and thus reduce the overall annotating cost.", "We have five professionals as experts, who have previously engaged in the annotation of sentiment and opinion-related tasks and have rich experience.", "They annotate 100 sentences together as examples (i.e., label the positive and negative opinion expressions inside the texts), and establish a simple guideline based on their
consensus after several discussions.", "The guideline includes the task definition and a description of the annotation principles. 6", "Next, we recruit 75 (crowd) students in our university for annotation.", "They come from different grades and different majors, such as Chinese, Literature, and Translation.", "We offer them the above annotation guideline to understand the task.", "We choose doccano 7 to build our annotation platform, and let these annotators become familiar with our task through the expert-annotated examples.", "When all crowd workers are ready, we start the crowd annotation phase.", "The prepared texts are split into micro-tasks so that each one consists of 500 sentences.", "Then we assign 3 to 5 workers to each micro-task, and their identities remain hidden from each other.", "Each worker cannot access a new task until their current one is finished.", "In the annotation of each sentence, workers need to label the positive and negative opinion expressions according to the guideline and their understandings.", "There is no limit on the number of positive or negative expressions in one sentence.", "They can also mark a sentence as No Opinion and skip it if they think there are no opinion expressions inside.", "After all crowd annotations are accomplished, we randomly select a small proportion of sentences and let experts reannotate them, resulting in the gold-standard development and test corpus. 8", "4 ChineseNlpCorpus weibo_senti_100k 5 Note that there are still a small number of sentences in our final dataset that have no opinion expression inside. 6 We share the guideline in Appendix A. 7 https://github.com/doccano/doccano 8 The corresponding crowdsourcing annotations consist of the crowdsourcing development and test corpus.", "Table 2: Data statistics of our constructed dataset; each row lists the number of unique annotations, positive expressions, negative expressions, and the average span length. Train-crowd: 32582, 11640, 35263, 5.05; Train-silver: 8047, 4167, 11411, 4.71; Train-gold: 8047, 3488, 10096, 4.79; Dev-crowd: 3427, 2338, 3905, 5.22; Dev-gold: 803, 706, 1035, 5.02; Test-crowd: 6265, 3573, 5290, 4.48; Test-gold: 1517, 999, 1373, 4.30.", "Specifically, for each sentence, we let 2 experienced experts individually reannotate it with references from the corresponding crowdsourcing annotations.", "They will give the final annotation of each sentence if their answers reach an agreement.", "If they have divergences, a third expert will help them modify their answers and reach an agreement.", "Then, we let all five experts go through the remaining dataset 9 , selecting the best annotations for each sentence, which can be regarded as the silver-standard training corpus.", "In the selection, each sentence is assigned to 1 expert, and the expert is only allowed to choose one (or several identical) best answer(s) from all the candidate crowdsourcing annotations.", "Finally, only for comparison, we also annotate the gold-standard training corpus, which will not be used in our model training.", "In the end, we arrive at 42,274 crowd annotations by 70 valid annotators, 10 covering 10,367 sentences.", "A total of 803 + 1517 = 2320 sentences, including expert annotations, are used for development and test evaluation.", "Table 2 shows the overall data statistics.", "The average number of annotators per sentence is 4.05, and each annotator labels an average of 827 sentences in the whole corpus.", "The overall Cohen's Kappa value of the crowd annotations is 0.35.", "When ignoring the tokens not included in any expression, the Kappa is only 0.17. 11", "The Kappa values are indeed very low, indicating the great and unavoidable ambiguities of the task with natural annotations. 12", "However, these values do not make much sense since we do not impose any well-designed comprehensive guideline during annotation.", "In fact, a comprehensive guideline for crowd workers is almost impracticable in our task, because they quite often disagree with a particular guideline based on their own unique and naive understandings.", "If we imposed such a guideline on them forcibly, the annotation cost would increase drastically (i.e., at least ten times more expensive according to our preliminary investigation) due to their reluctance as well as the endless expert guidance required.", "In the remainder of this work, we will try to verify the real value of these crowdsourcing annotations empirically: Is the collected training corpus really beneficial for our Chinese OEI task?", "The OEI task aims to extract all polarized text spans that express certain opinions in a sentence.", "It can be naturally converted into a sequence labeling problem by using the BIO schema, tagging each token by the boundary information of opinion expressions, where B-X and I-X (i.e., X can be either POS or NEG, denoting the polarity) indicate the start and other positions of a certain expression, and O denotes a token that does not belong to any expression.", "In this work we adapt the CRF-based system (Breck et al., 2007) to the neural setting and enhance it with a BiLSTM encoder as well as pre-trained BERT representations.", "Given a sentence x = x_1 ... x_n (where n denotes the sentence length), we first convert it into contextual representations r_1 ... r_n by the pre-trained BERT with adapter tuning (Houlsby et al., 2019):", "Unlike the standard BERT exploration, ADBERT introduces two extra adapter modules inside each transformer layer, as shown in Figure 2 for the details.", "11 To compute the Kappa value of sequential annotations, we treat each token (not sentence) as an instance, and then aggregate the results of one sentence by averaging.", "With this modification, we do not need to fine-tune all BERT parameters; instead, learning the parameters of the adapters is enough to obtain strong performance.", "Thus ADBERT is more parameter-efficient.", "The standard adapter layer can be formalized as: down-proj: h_mid = GELU(W_down h_in + b_down), up-proj: h_out = W_up h_mid + b_up + h_in, (2) where W_down, W_up, b_down and b_up are model parameters, which are much smaller in scale than the transformer parameters, and the dimension size of h_mid is also smaller than that of the corresponding transformer dimension. 13", "The rest of the baseline is a standard BiLSTM-CRF model, which is a stack of BiLSTM, MLP and CRF layers, from which we obtain sequence-level scores for each candidate output y: score(y) = BiLSTM-CRF([r_1 ... r_n]), p(y) = exp(score(y)) / Σ_{ỹ ∈ Ỹ} exp(score(ỹ)), (3) where p(y) is the probability of the given ground-truth, and Ỹ is the set of all possible outputs for score normalization.", "The model parameters are updated by the sentence-level cross-entropy loss L = -log p(y) when y is regarded as gold-standard.", "13 The dimension sizes of h_in and h_out are consistent with the corresponding transformer hidden states.", "Crowdsourcing training.", "In the crowdsourcing setting, we only have annotations from multiple non-expert annotators, so no gold-standard label is available for our training.", "To handle this situation, we introduce two straightforward and widely-used methods.", "First, we treat all annotations uniformly as training instances, even though they may introduce noise into our training objective; this method is denoted by All for short.", "Second, we exploit majority voting 14 to obtain an aggregated answer for each sentence for model training, denoted as MV .", "In most previous crowdsourcing studies, there
is a common agreement that crowd annotations are noisy and should be rectified during training (Rodrigues et al., 2014a; Nguyen et al., 2017; Simpson and Gurevych, 2019).", "Zhang et al. (2021) propose to regard all crowdsourcing annotations as gold-standard, and introduce a representation learning model to jointly encode the sentence and the annotator and extract annotator-aware features, which models the unique understandings of annotators (this setting is indeed very consistent with our corpus).", "Since our constructed dataset has no gold-standard training labels 15 , we adopt their unsupervised representation learning approach, which is named annotator-adapter .", "It applies the Parameter Generator Network (PGN) (Platanios et al., 2018; Jia et al., 2019; Üstün et al., 2020) to generate annotator-specific adapter parameters for the ADBERT, as shown in Figure 3.", "Given an input sentence-annotator pair (x = x_1, ..., x_n, a), we exploit an embedding layer to convert the annotator ID a into its vectorial form e_a, and then PGN is used to generate the model parameters of several high-level adapter layers inside BERT, conditioned on e_a.", "Concretely, we apply PGN to the last p layers of BERT, where p is a hyper-parameter of our model.", "We refer to the updated input representation as PGN-ADBERT.", "Formally, for an adapter defined by Equation 2, all its parameters are dynamically generated by: W_down = T_{W_down} e_a, b_down = T_{b_down} e_a, W_up = T_{W_up} e_a, b_up = T_{b_up} e_a, (4) 14 The voting is conducted at the token level, and continuous tokens are then merged if they belong to a same-type expression.", "where T_{W_down}, T_{b_down}, T_{W_up} and T_{b_up} are learnable model parameters for the PGN-ADBERT.", "For any matrix-format model parameter W ∈ R^{M×N}, we have T_W ∈ R^{M×N×d}, where d is the dimension of the annotator embedding.", "Similarly, for the vectorial parameter b ∈ R^N, we have T_b ∈ R^{N×d}.", "Thus, the overall input representation of the annotator-adapter can
be rewritten as: r_1 ... r_n = PGN-ADBERT(x_1 ... x_n, e_a), (5) which jointly encodes the text and the annotator.", "At the training stage, it uses the embeddings of crowd annotators to generate crowd model parameters to learn crowd annotations.", "At the inference stage, it uses the centroid of all annotator embeddings to estimate the expert, predicting high-quality opinion expressions for raw texts.", "This expert embedding can be computed directly by: e_expert = (1/|A|) Σ_{a ∈ A} e_a, (6) where A represents all annotators.", "By scrutinizing the annotator-adapter model, we can find that there is a minor mismatch between model training and testing.", "During training, the input annotators are all encoded individually.", "During testing, however, the input expert is a mixture of the crowd annotators, which is never modeled.", "To tackle this divergence, we introduce the mixup (Zhang et al., 2018) strategy over the individual annotators to generate a number of synthetic samples with linear mixtures of annotators, making the training and testing highly similar.", "The mixup strategy is essentially an effective data augmentation method that has received increasing attention recently in the NLP community (Zhang et al., 2020; Sun et al., 2020).", "The method was originally applied between two individual training instances, using linear interpolation over a hidden input layer and the output.", "In this work, we confine the mixup to the two training instances with the same input sentence for annotator mixup.", "Formally, given two training instances (x_1, a_1, y_1) and (x_2, a_2, y_2), the mixup is executed only when x_1 = x_2; thus the interpolation is actually performed between (a_1, y_1) and (a_2, y_2).", "Concretely, the input interpolation is conducted at the embedding layer, and the output interpolation is directly mixed at the sentence level: e_mix = λ e_{a_1} + (1 - λ) e_{a_2}, y_mix = λ y_1 + (1 - λ) y_2, (7) where λ ∈ [0, 1] is a
hyper-parameter which is usually sampled from the Beta(α, α) distribution, and y_* is the one-hot vectorial form, where * ∈ {1, 2, mix}. 16", "Finally, the loss objective of the new instance is calculated by: L_mix = -log ( exp(score(y_mix)) / Σ_{ỹ ∈ Ỹ} exp(score(ỹ)) ), (8) where all scores are computed based on x_1 / x_2 and e_mix, and Ỹ is the set of all possible outputs for x_1 / x_2.", "Finally, we can produce a number of augmented instances by the annotator mixup.", "These instances, together with the original training instances, are used to optimize our model parameters.", "The enhanced model is able to perform inference more robustly by using the mixture (i.e., average) of annotators, which is the estimation of the expert.", "Evaluation.", "We use the span-level precision (P), recall (R) and their F1 for evaluation, since OEI is essentially a span recognition task.", "Following Breck et al. (2007) and Irsoy and Cardie (2014), we exploit three types of metrics, namely exact matching, proportional matching and binary matching.", "16 Note that y is at the sentence level, where the dimension size is the number of all possible outputs of the given input.", "The exact metric is straightforward and has been widely applied for span-level entity recognition tasks; it regards a predicted opinion expression as correct only when its start-end boundaries and polarity are all correct.", "Here we exploit the exact metric as the major method.", "The two other metrics are exploited because the exact boundaries are very difficult to unify even for experts.", "The binary method treats an expression as correct when it contains an overlap with the ground-truth expression, and the proportional method uses a balanced score by the proportion of the overlapped area referring to the ground-truth.", "We use the best-performing model on the development corpus to evaluate the performance on the test corpus.", "All experiments are
conducted on a single RTX 2080 Ti card in an 8-GPU server with a 14-core CPU and 128GB memory.", "We run each setting 5 times with different random seeds, and the median evaluation scores are reported.", "Hyper-parameters.", "We exploit bert-base-chinese 17 for input representations.", "The adapter bottleneck size and the BiLSTM hidden size are set to 128 and 400, respectively.", "For the annotator-adapter, we set the annotator embedding size d = 8 and generate the adapter parameters for the last p = 6 BERT layers.", "For the annotator mixup, we set α of the Beta(α, α) distribution to 0.5.", "We apply sequential dropout to the input representations, which randomly sets the hidden vectors in the sequence to zeros with a probability of 0.2, to avoid overfitting.", "We use the Adam algorithm to optimize the parameters with a constant learning rate of 1e-3 and a batch size of 64, and apply the gradient clipping mechanism with a maximum value of 5.0 to avoid gradient explosion.", "Baselines.", "The two annotator-agnostic baselines (i.e., ALL and MV ) and the silver-corpus-trained model Silver are all implemented with the same baseline structure and hyper-parameters.", "We also implement two annotator-aware methods presented in Nguyen et al.
(2017), in which the annotator-dependent noises are modeled explicitly.", "The LSTM-Crowd model encodes the output label bias (i.e., noises) for each individual annotator (biased distributions) towards the expert (zeroed distribution), and the LSTM-Crowd-cat model applies a similar idea but implements it at the BiLSTM hidden layer.", "17 https://github.com/google-research/bert", "Table 3: The test results, where all methods are backed by BERT-BiLSTM-CRF for a fair comparison; each cell lists P/R/F1 for the exact, proportional and binary metrics in turn. Gold: 61.12/53.54/57.08, 81.97/72.28/76.82, 85.79/77.51/81.44; Silver: 55.27/53.25/54.24, 75.79/73.01/74.37, 81.23/78.25/79.71; ALL: 61.06/45.49/52.14, 82.47/61.44/70.42, 86.98/64.80/74.27; MV: 53.95/50.97/52.42, 74.23/70.13/72.12, 78.98/74.62/76.74; LSTM-Crowd (Nguyen et al., 2017): 60.55/47.68/53.35, 83.79/61.32/70.82, 88.71/64.92/74.98; LSTM-Crowd-cat (Nguyen et al., 2017): 59.07/47.51/52.66, 77.56/62.39/69.15, 83.70/67.33/74.63; BSC-seq (Simpson and Gurevych, 2019): 40.80/59.27/48.33, 55.35/82.41/66.23, 60.66/90.33/72.58; Annotator-Adapter (Zhang et al., 2021): 61.08/48.16/53.86, 81.70/65.40/72.65, 87.20/69.81/77.55; Annotator-Adapter + mixup: 61.27/49.22/54.59, 81.82/68.30/74.45, 87.02/71.48/78.49.", "During testing, zero-vectors are exploited to simulate the expert accordingly.", "Their main idea is to achieve robust training on the noisy dataset, which is totally different from our approach.", "In addition, we aggregate the crowd labels of the training corpus by a Bayesian inference method (Simpson and Gurevych, 2019), namely BSC-seq , based on their code 18 , and then evaluate its results with the same BERT-BiLSTM-CRF architecture.", "Table 3 shows the test results on our dataset.", "In general, the exact matching scores are all at a relatively low level, demonstrating that precise opinion boundaries are indeed difficult to identify.", "With the gradual relaxation of the metrics (from exact to binary ), scores increase accordingly, showing that these models can
roughly locate the opinion expressions to a certain degree.", "Dataset comparison.", "Similar to tasks like NER (Zhou et al., 2021), POS tagging, and dependency parsing (Straka, 2018), in which English models have performed better than Chinese ones, we see the same pattern in our OEI task.", "The exact matching F1 of 57.08 for the Gold-corpus-trained model still shows a performance gap compared with the English MPQA dataset (i.e., 63.71 by a similar BERT-based model of Xia et al. (2021)).", "This may be due to (1) the opinion boundaries in the word-based English MPQA being easier to locate than those in our character-based Chinese dataset; and (2) the social media domain of our dataset being more difficult than the news domain of MPQA.", "18 https://github.com/UKPLab/arxiv2018-bayesian-ensembles", "Method comparison.", "First, we compare the two annotator-agnostic methods (i.e., ALL and MV) with the annotator-aware ones (i.e., the rest of the models).", "As shown in Table 3, we can see that annotator-aware modeling is effective as a whole, bringing better performance on exact matching.", "In particular, our basic annotator-adapter model gives the best F1 among the selected baselines, demonstrating its advantage in crowdsourcing modeling.", "When the annotator mixup is applied, the test scores are further boosted, showing the effectiveness of our annotator mixup.", "The overall tendencies under the two other metrics are similar when comparing our models with the others.", "Our final performance is not only comparable to the silver-corpus-trained model, which we can take as a weak upper bound,", "but also close to the upper-bound model with expert annotations (i.e., Gold).", "Thus, our result for Chinese OEI is completely acceptable, demonstrating that crowdsourcing annotations are indeed of great value for model training.", "The observation indicates that crowdsourcing could be a highly promising alternative for building a Chinese OEI system at a low cost.", "Here we
conduct fine-grained analyses to better understand the task and these methods in depth; exact matching is used for evaluation throughout this subsection.", "Several additional analyses are shown in the Appendix.", "Performance by the opinion expression length.", "Intuitively, the identification of opinion expressions can be greatly affected by the length of the expressions, and longer expressions might be more challenging to identify precisely.", "[Figure 4: F1 scores of exact matching in terms of the opinion expression length (1-2 to 8+), for MV, LSTM-Crowd, Annotator-Adapter, and Annotator-Adapter + mixup.] Figure 4 shows", "the F1 scores in terms of expression length for the four models we focused on.", "We can see that the F1 score decreases dramatically when the expression length becomes larger than 4, which is consistent with our intuition.", "In addition, the annotator-adapter model is better than the previous methods, and the mixup model reaches the best performance in almost all the categories, indicating the robustness of our annotator mixup.", "Influence of the opinion number per sentence.", "One sentence may contain more than one opinion expression, and these opinions might be mutually helpful or introduce additional ambiguity.", "It is interesting to study the model behaviors in terms of opinion numbers.", "Here we conduct experimental comparisons by dividing the test corpus into three categories: (1) only one opinion expression exists in a sentence; (2) at least two opinions exist, and they are of the same sentiment polarity; (3) both positive and negative opinion expressions exist.", "As shown in Figure 5, the sentences with multiple opinions of a consistent polarity obtain the highest F1 score.", "The potential reason might be that the expressed opinions of these sentences are usually highly affirmative with strong sentiments, and the consistent expressions can be mutually helpful according to our assumption.", "For the other two categories,
it seems that they are equally difficult according to the final scores.", "For all three categories, the two annotator-adapter models demonstrate better performance than the others.", "Self-evaluation of crowd annotators.", "The annotator adapter uses a pseudo expert embedding to predict opinion expressions and evaluates performance on the gold-standard annotations of experts.", "It is interesting to examine the self-evaluation performance on the crowd annotations of the test corpus as well.", "[Figure 5: F1 scores of exact matching for three sentence categories: (1) one-opinion (O), (2) multiple-opinion single-polarity (MOSP), and (3) multiple-opinion contradictory-polarity (MOCP); models: MV, LSTM-Crowd, Annotator-Adapter, and Annotator-Adapter + mixup.] During inference, we use the crowd", "annotators as inputs, and calculate the model performance on the corresponding crowd annotations.", "Table 4 shows the results.", "First, the two annotator-agnostic models (i.e., ALL and MV) have similarly poor performance since they try to estimate the expert annotation function rather than learn the crowd annotations.", "Second, the performance of the two annotator-noise-modeling methods, LSTM-Crowd and LSTM-Crowd-cat, is close to that of the annotator-agnostic ones, showing that they are also incapable of modeling individual annotators.", "Then, our two annotator-adapter models achieve leading performance compared with all baseline methods, giving a significant gap (at least 47.79 − 41.97 = 5.
82 in F1).", "They are more capable of predicting crowd annotations, demonstrating the ability to model the annotators effectively.", "To our surprise, the mixup annotator-adapter model does not exceed the basic one, indicating that the mixed annotator embeddings in training could slightly hurt the modeling of individual annotators.", "OEI is an important task in opinion mining (Liu, 2012), and has received great interest (Breck et al.,", "2007; Irsoy and Cardie, 2014; Xia et al., 2021).", "The early studies date back to Wilson et al. (2005) and Breck et al. (2007), which exploit CRF-based methods for the task with manually crafted features.", "SemiCRF was adopted next in order to exploit span-based features (Yang and Cardie, 2012).", "Recently, neural network models have attracted the most attention.", "Irsoy and Cardie (2014) present a deep bi-directional recurrent neural network (RNN) to identify opinion expressions.", "BiLSTM is also used by Katiyar and Cardie (2016) and Zhang et al. (2019), showing improved performance on OEI.", "Fan et al. (2019) design an Inward-LSTM to incorporate the opinion target information for identifying opinion expressions given their target, which can be seen as a special case of our task.", "Xia et al.
(2021) employ pre-trained BERT representations (Devlin et al., 2019) to improve the performance of jointly extracting the opinion expression, holder, and target with a span-based model.", "All the above studies are in English and based on MPQA (Wiebe et al., 2005) or customer reviews (Wang et al., 2016, 2017; Fan et al., 2019), since very few datasets are available for other languages.", "Hence, we construct a large-scale Chinese corpus for this task by crowdsourcing, and borrow a novel representation learning model (Zhang et al., 2021) to handle the crowdsourced annotations.", "In this work, we take the general BERT-BiLSTM-CRF architecture as the baseline, which is a competitive model for the OEI task.", "Crowdsourcing, as a cheap way to collect a large-scale training corpus for supervised models, has gradually become popular in practice (Snow et al., 2008; Callison-Burch and Dredze, 2010; Trautmann et al., 2020).", "A number of models have been developed to aggregate a higher-quality corpus from the crowdsourced corpus (Raykar et al., 2010; Rodrigues et al., 2014a,b; Moreno et al., 2015), aiming to reduce the gap with respect to an expert-annotated corpus.", "Recently, modeling the bias between the crowd annotators and the oracle experts has been demonstrated to be effective (Nguyen et al., 2017; Simpson and Gurevych, 2019; Li et al., 2020); these methods focus on the label bias between the crowdsourced annotations and the gold-standard answers, regarding crowdsourced annotations as annotator-sensitive noise.", "Zhang et al. (2021) do not treat crowdsourcing annotations as noisy labels, but instead regard them as ground truths under the understanding of individual crowd annotators.", "In this work, we follow the idea of Zhang et al.
(2021) to explore our crowdsourcing corpus, and further propose the annotator mixup to enhance the learning of the expert representation for the test stage.", "We presented the first work on Chinese OEI with crowdsourcing, which is also the first crowdsourcing work for OEI.", "First, we constructed an extremely noisy crowdsourcing corpus at a very low cost, and also built a gold-standard dataset annotated by experts for experimental evaluation.", "To verify the value of our low-cost and extremely noisy corpus, we exploited the annotator-adapter model presented by Zhang et al. (2021) to fully explore the crowdsourcing annotations, and further proposed an annotator-mixup strategy to enhance the model.", "Experimental results show that the annotator-adapter makes the best use of our crowdsourcing corpus compared with several representative baselines, and that the annotator-mixup strategy is also effective.", "Our final performance reaches an F-score of 54.59% by exact matching.", "This number is highly competitive by reference to the model trained on expert annotations (57.
08%), which indicates that crowdsourcing can be highly recommendable for setting up a Chinese OEI system quickly and cheaply, although the collected corpus is extremely noisy.", "We construct a large-scale Chinese opinion expression identification dataset with crowd annotations.", "We access the original posts by manually traversing the relevant Weibo topics or searching the corresponding keywords, and then copy and anonymize the text contents.", "All posts we collected are open-access.", "In addition, we also anonymize all annotators and experts (keeping only IDs for research purposes).", "All annotators were properly paid according to their actual effort.", "This dataset can be used for both the Chinese opinion expression identification task and crowdsourced sequence labeling.", "We thank all reviewers for their hard work.", "This research is supported by grants from the National Key Research and Development Program of China (No. 2018YFC0832101) and the National Natural Science Foundation of China (No. 62176180)." ]
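The annotator-mixup strategy discussed above can be pictured as linear interpolation in the annotator-embedding space. Below is a minimal, illustrative sketch (the function name, embedding values, and Beta parameters are our own assumptions, not taken from the paper): virtual annotators are formed by mixing two annotator embeddings with a Beta-sampled ratio, which smooths the space around the pseudo expert embedding used at test time.

```python
import numpy as np

rng = np.random.default_rng(seed=7)

def annotator_mixup(emb_a, emb_b, lam):
    # Linear interpolation of two annotator embeddings; in training,
    # the same lam would also weight the two annotators' label losses.
    return lam * emb_a + (1.0 - lam) * emb_b

# Hypothetical 4-dimensional embeddings for two crowd annotators.
emb_a = np.array([1.0, 0.0, 0.0, 0.0])
emb_b = np.array([0.0, 1.0, 0.0, 0.0])

lam = float(rng.beta(0.5, 0.5))  # mixing ratio sampled from Beta(alpha, alpha)
virtual = annotator_mixup(emb_a, emb_b, lam)
```

Because the mixed embedding stays on the segment between the two annotators, training on such virtual annotators regularizes the embedding space rather than adding new parameters.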
[ "abstain", "abstain", "objective", "method", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "method", "method", "method", "objective", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "objective", "method", "method", "objective", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", 
"result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "abstain", "method", "other", "other", "other", "other", "abstain", "objective", "objective", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "other", "other" ]
[ "We pioneer the first extractive summarization-based collaborative filtering model called ESCOFILT.", "Our proposed model specifically produces extractive summaries for each item and user.", "Unlike other types of explanations, summary-level explanations closely resemble real-life explanations.", "The strength of ESCOFILT lies in the fact that it unifies representation and explanation.", "In other words, extractive summaries both represent and explain the items and users.", "Our model uniquely integrates BERT, K-Means embedding clustering, and multilayer perceptron to learn sentence embeddings, representation-explanations, and user-item interactions, respectively.", "We argue that our approach enhances both rating prediction accuracy and user/item explainability.", "Our experiments illustrate that ESCOFILT's prediction accuracy is better than that of the other state-of-the-art recommender models.", "Furthermore, we propose a comprehensive set of criteria that assesses the real-life explainability of explanations.", "Our explainability study demonstrates the superiority of and preference for summary-level explanations over other explanation types.", "Collaborative filtering (CF) approaches are the most dominant and outstanding models in the recommender systems literature.", "CF mainly focuses on learning accurate representations of users and items, denoting user preferences and item characteristics, respectively (Chen et al., 2018; Tay et al., 2018).", "The earliest CF models learned such representations based on user-given numeric ratings, but employing them is an oversimplification of user preferences and item characteristics (Koren et al., 2009; Musto et al., 2017).", "In this regard, review texts have been utilized to alleviate this issue.", "Reviews Received by the 'Journaling Bible' Item", "Review-Level: I brought this as I wanted a separate Bible to do Bible journaling.", "It is very beautiful and has many images that can be coloured.", "The pages are similar to Bible
paper and cream in colour.", "Overall a wonderful Bible to do journaling and meditate God's Word.", "Word-Level: I brought this as I wanted a separate Bible to do Bible journaling.", "It is very beautiful and has many images that can be coloured.", "The pages are similar to Bible paper and cream in colour.", "Overall a wonderful Bible to do journaling and meditate God's Word.", "Summary-Level: I was not expecting this Bible to be so beautiful when I pre-ordered it 5 months ago, but it arrived in the mail today and it is just gorgeous!", "This removes that concern through some beautifully done artwork and lettering.", "The pages are similar to Bible paper and cream in colour.", "Overall a wonderful Bible to do journaling and meditate God's Word.", "The primary benefit of using reviews as the source of features is that they can cover the inherently multi-faceted nature of user opinions.", "Users can explain their rationales for the ratings they give to items.", "Thus, reviews contain a large quantity of rich latent information that cannot be otherwise acquired solely from ratings (Chen et al., 2018).", "Still, a typical limitation exists for most recent review-based recommender systems: the intrinsic black-box nature of neural networks (NN) makes the explainability behind predictions obscure (Ribeiro et al., 2016; Wang et al., 2018b).", "The intricate architecture of hidden layers has obscured the decision-making processes of neural models (Peake and Wang, 2018).", "Providing explanations is essential as they could help persuade users to develop further trust in a recommender system and make eventual purchasing decisions (Peake and Wang, 2018; Ribeiro et al., 2016; Zhang et al., 2014).", "In light of this, current research efforts have attempted to improve the explainability aspect of recommender systems.", "Common types of explanations include review-level and word-level.", "In a review-level explanation, the attention mechanism is applied to measure every review's
contribution to the item (or user) embedding (Chen et al., 2018; Feng and Zeng, 2019).", "High-scoring reviews are then selected to serve as explanations.", "On the other hand, in a word-level or token-level explanation, informative words in a local window or textual block are selected together (Liu et al., 2019a; Pugoy and Kao, 2020; Seo et al., 2017).", "Similar to the first mechanism, top words are chosen due to their high attention weights.", "Evidently, review-level and word-level explanations are side-effects of applying the attention mechanism to reviews and words.", "These have been integral and beneficial in formulating better user and item representations.", "However, we contend that both types of explanations may not completely resemble real-life explanations.", "In logic, an explanation is a set of intelligible statements usually constructed to describe and clarify the causes, context, and consequences of objects, events, or phenomena under examination (Drake, 2018).", "Based on our example in Table 1, the review-level explanation is exactly the same as the second item review, assuming that it has the highest attention weight.", "Due to this, it also inadvertently disregards other possibly useful sentences from other reviews with lower attention scores.", "Furthermore, even though the word-level explanation contains informative words, it may not be practical in an actual recommendation scenario since it typically appears as fragments.", "Word-level explanations may not be intelligible enough due to humans' natural bias toward sentences, which are defined to express complete thoughts (Andersen, 2014).", "Therefore, in this paper, we propose the first extractive summarization-based collaborative filtering model, ESCOFILT.", "For every item and user, our novel model generates extractive summaries that bear more resemblance to real-life explanations, as seen in Table 1's last row.", "Unlike a review-level explanation, a summary-level explanation (which
we also call extractive summary, representative summary, and representation-explanation in different sections of this paper) is composed of informative statements gathered from different reviews.", "As opposed to a word-level explanation, an ESCOFILT-produced explanation is more comprehensible as it can convey complete thoughts.", "It should be noted that our model performs extractive summarization in an unsupervised manner since expecting ground-truth summaries for all items and users in a large dataset is unrealistic.", "The strength of ESCOFILT lies in the fact that it uniquely unifies representation and explanation.", "In other words, an extractive summary both represents and explains a particular item (or user).", "We argue that our approach enhances both rating prediction accuracy and user/item explainability, which are later validated by our experiments and explainability study.", "These are the main contributions of our paper:", "To the best of our knowledge, we pioneer the first extractive summarization-based CF framework.", "Our proposed model uniquely integrates BERT, K-Means embedding clustering, and multilayer perceptron (MLP) to respectively learn sentence embeddings, extractive representation-explanations, and user-item interactions.", "To the extent of our knowledge, ESCOFILT is one of the first recommender models that employ BERT as a review feature extractor.", "We also propose a comprehensive set of criteria that assesses the explainability of explanation texts in real life.", "Our experiments illustrate that the rating prediction accuracy of ESCOFILT is better than that of the other state-of-the-art models.", "Moreover, our explainability study shows that summary-level explanations are superior to and more preferred than the other types of explanations.", "Developing a CF model involves two crucial steps, i.e., learning user and item representations and modeling user-item interactions based on those representations (He et al., 2018).", "One of the
foundational works in utilizing NNs for CF is neural collaborative filtering, or NCF (He et al., 2017).", "Originally implemented for CF driven by implicit feedback data, NCF learns non-linear interactions between users and items by employing MLP layers as its interaction function.", "DeepCoNN is the first deep learning-based model representing users and items from reviews in a coordinated manner (Zheng et al., 2017).", "The model consists of two parallel networks powered by convolutional neural networks (CNN).", "One network learns user behavior by examining all the reviews a user has written, and the other network models item properties by exploring all the reviews an item has received.", "A shared layer connects these two networks, and factorization machines capture user-item interactions.", "Another notable model is NARRE, which shares several similarities with DeepCoNN.", "NARRE is also composed of two parallel CNN-based networks for user and item modeling (Chen et al., 2018).", "For the first time, this model incorporates the review-level attention mechanism that determines each review's usefulness or contribution based on attention weights.", "As a side-effect, this also leads to review-level explanations; reviews with the highest attention scores are presented as explanations.", "These weights are then integrated into the representations of users and items to enhance embedding quality and prediction accuracy.", "Other related studies include D-Attn (Seo et al., 2017), MPCN (Tay et al., 2018), DAML (Liu et al., 2019a), and HUITA (Wu et al., 2019).", "These all employ different types of attention mechanisms to distinguish informative parts of a given data sample, resulting in simultaneous accuracy and explainability improvements.", "D-Attn integrates global and local attention to score each word to determine its relevance in a review text.", "MPCN is similar to NARRE, but the former relies solely on attention mechanisms without any need for convolutional layers.", "DAML utilizes
CNN's local and mutual attention to learn review features, and HUITA incorporates a hierarchical, three-tier attention network.", "Most of the aforementioned models take advantage of CNNs as automatic review feature extractors.", "Coupling them with mainstream word embeddings leads to the formulation of user and item representations.", "However, such approaches fail to consider global context and word frequency information.", "These two factors are crucial as they can affect recommendation performance (Pilehvar and Camacho-Collados, 2019; Wang et al., 2018a).", "To deal with such dilemmas, NCEM (Feng and Zeng, 2019) and BENEFICT (Pugoy and Kao, 2020) use a pre-trained BERT model to obtain review features.", "BERT's advantage lies in its full retention of global context and word frequency information (Feng and Zeng, 2019).", "For explainability, NCEM similarly adopts NARRE's review-level attention.", "On the contrary, BENEFICT utilizes BERT's self-attention weights in conjunction with a solution to the maximum subarray problem (MSP).", "BENEFICT's approach produces an explanation based on a subarray of contiguous tokens with the largest possible sum of self-attention weights.", "In summary, there appears to be a trend: tackling explainability consequently improves prediction and recommendation performance.", "While most recommender models address this via attention mechanisms, our proposed model does so by unifying representation and explanation in the form of extractive summaries.", "As evidenced in the succeeding sections of this paper, we argue that our approach can further enhance CF's accuracy and explainability.", "ESCOFILT, whose architecture is illustrated in Figure 1, has two parallel components that learn summarization-based user and item representations.", "From Sections 3.2 to 3.3, we will only discuss the item modeling process as it is nearly identical to user modeling, with their inputs as the only difference.", "The training dataset
$\mathcal{D}$ consists of $N$ tuples, with $N$ denoting the size of the dataset.", "Each tuple follows the form $(u, i, r_{ui}, v_{ui})$, where $r_{ui}$ and $v_{ui}$ respectively refer to the ground-truth rating and the review accorded by user $u$ to item $i$.", "Moreover, let $V_u = \{v_{u1}, v_{u2}, \ldots, v_{uj}\}$ be the set of all $j$ reviews written by user $u$.", "Similarly, let $V_i = \{v_{1i}, v_{2i}, \ldots, v_{ki}\}$ be the set of all $k$ reviews received by item $i$.", "Both $V_u$ and $V_i$ are obtained from scanning $\mathcal{D}$ itself.", "The input of ESCOFILT is a user-item pair $(u, i)$ from each tuple in $\mathcal{D}$.", "We particularly feed $V_u$ and $V_i$ to the model as they initially represent $u$ and $i$.", "The output is the predicted rating $\hat{r}_{ui} \in \mathbb{R}$ that user $u$ may give to item $i$.", "Thus, the rating prediction task $\mathcal{R}$ can be expressed as:

$\mathcal{R}(u, i) = (V_u, V_i) \rightarrow \hat{r}_{ui}$  (1)

Its corresponding objective function, the mean squared error (MSE), is given below:

$\mathrm{MSE} = \frac{1}{|\mathcal{D}|} \sum_{u,i} (r_{ui} - \hat{r}_{ui})^2$  (2)

3.2 Sentence Extraction and BERT Encoding

First, the reviews in $V_i$ are concatenated together to form a single document.", "A sentence segmentation component called Sentencizer (by spaCy) is utilized to split this document into individual sentences (Gupta and Nishu, 2020).", "The set of all sentences in $V_i$ is now given by $S_i = \{s_{i1}, s_{i2}, \ldots, s_{ig}\}$, where $g$ refers to the total number of sentences.", "Afterward, $S_i$ is fed to a pre-trained BERT-Large model.", "It should be noted that we opt not to use [CLS] representations as these may not necessarily provide the best sentence embeddings (Miller, 2019).", "In this regard, we tap BERT's penultimate encoder layer to obtain the contextualized word embeddings.", "The word embeddings of each sentence in $S_i$ are stored in $\mathbf{S}_i \in \mathbb{R}^{g \times w \times 1024}$; $w$ pertains to the number of words in a sentence, and 1024 is the embedding size of BERT.", "Then, we average every sentence's word embeddings in $\mathbf{S}_i$ to produce the set of sentence embeddings
$S'_i = \{s'_{i1}, s'_{i2}, \ldots, s'_{ig}\}$, with $S'_i \in \mathbb{R}^{g \times 1024}$.", "$K$-Means clustering is next performed to partition the sentence embeddings in $S'_i$ into $K$ clusters.", "Its objective is to minimize the intra-cluster sum of the distances from each sentence to its nearest centroid, given by the following equation (Xia et al., 2020):

$J_i = \sum_{x=1}^{K} \sum_{s'_{iy} \in C_x} \| s'_{iy} - c_x \|^2$  (3)

where $c_x$ is the centroid of cluster $C_x$ that is closest to the sentence embedding $s'_{iy}$.", "The objective function $J_i$ is optimized for item $i$ by running the assignment and update steps until the cluster centroids stabilize.", "The assignment step assigns each sentence to a cluster based on the shortest sentence embedding-cluster centroid distance, provided by the formula below:

$d(s'_{iy}) = \operatorname{argmin}_{x=1,\ldots,K} \{ \| s'_{iy} - c_x \|^2 \}$  (4)

where $d$ is a function that obtains the cluster closest to $s'_{iy}$.", "Furthermore, the update step recomputes the cluster centroids based on new assignments from the previous step.", "This is defined as:

$c_x = \frac{1}{|C_x|} \sum_{y=1}^{g} \{ s'_{iy} \mid d(s'_{iy}) = x \}$  (5)

where $|C_x|$ refers to the number of sentences that cluster $C_x$ contains.", "By introducing clustering, redundant and related sentences are grouped in the same cluster.", "Concerning this, $K$ is derived using this equation:

$K = \rho_i \, g$  (6)

where $\rho_i$ pertains to the item summary ratio, i.e., the percentage of sentences that comprise an item's extractive summary.", "This subsequently implies that $K$ denotes the actual number of sentences in the summary.", "Sentences closest to each cluster centroid are selected and combined to form the item's representation-explanation.", "This is mathematically expressed as:

$e(C_x) = \operatorname{argmin}_{y=1,\ldots,g} \{ \| s'_{iy} - c_x \|^2 \}$, $\quad ItemRX_i = \frac{1}{K} \sum_{x=1}^{K} s'_{i,e(C_x)}$  (7)

where $e$ is a function that returns the nearest sentence to the centroid $c_x$ of cluster $C_x$, and $ItemRX_i$
$\in \mathbb{R}^{1 \times 1024}$ is the representation-explanation embedding of item $i$.", "Inspired by NARRE (Chen et al., 2018), we also draw some principles from the traditional latent factor model by incorporating rating-based hidden vectors that depict users and items to a certain extent.", "These are represented by $UserIV$ and $ItemIV$, both in $\mathbb{R}^{1 \times m}$, where $m$ is the dimension of the latent vectors.", "Such vectors are fused with their respective representation-explanation embeddings.", "This is facilitated by these fusion levels, illustrated by the following formulas:

$f_u = (UserRX_u W_u + b_u) + UserIV_u$
$f_i = (ItemRX_i W_i + b_i) + ItemIV_i$
$f_{ui} = [f_u, f_i]$  (8)

where $f_u$ and $f_i$ pertain to the preliminary fusion layers and both are in $\mathbb{R}^{1 \times m}$; $W_u$ and $W_i$ are weight matrices in $\mathbb{R}^{1024 \times m}$; $b_u$ and $b_i$ refer to bias vectors; and $f_{ui} \in \mathbb{R}^{1 \times 2m}$ denotes the initial user-item interactions from the third fusion layer and is later fed to the MLP.", "The MLP is necessary to model the CF effect, i.e., to learn meaningful non-linear interactions between users and items.", "An MLP with multiple hidden layers typically implies a higher degree of non-linearity and flexibility.", "Similar to the strategy of He et al.
(2017), ESCOFILT adopts an MLP with a tower pattern; the bottom layer is the widest, while every succeeding top layer has fewer neurons.", "A tower structure enables the MLP to learn more abstractive data features.", "Specifically, we halve the size of hidden units for each successive higher layer.", "ESCOFILT's MLP component is defined as follows:

$h_1 = \mathrm{ReLU}(f_{ui} W_1 + b_1)$, $\ldots$, $h_L = \mathrm{ReLU}(h_{L-1} W_L + b_L)$  (9)

Dataset                  #Reviews   #Users   #Items
Automotive               20,473     2,928    1,835
Digital Music            64,706     5,541    3,568
Instant Video            37,126     5,130    1,685
Patio, Lawn, & Garden    13,272     1,686    962

Table 2: Statistics of the datasets utilized in our study.

where $h_L$ represents the $L$-th MLP layer, and $W_L$ and $b_L$ pertain to the $L$-th layer's weight matrix and bias vector, respectively.", "As far as the MLP's activation function is concerned, we select the rectified linear unit (ReLU), which yields better performance than other activation functions (He et al., 2017).", "Finally, the MLP's output is fed to one more linear layer to produce the predicted rating:

$\hat{r}_{ui} = h_L W_{L+1} + b_{L+1}$  (10)

4 Empirical Evaluation

4.1 Research Questions

In this section, we detail our experimental setup designed to answer the following research questions (RQs): RQ1: Does ESCOFILT outperform the other state-of-the-art recommender baselines?", "RQ2: Is embedding clustering effective?", "RQ3: Can our model produce explanations acceptable to humans in real life?", "Table 2 summarizes the four public datasets 1 that we utilized in our study.", "These datasets are Amazon 5-core, wherein users and items are guaranteed to have at least five reviews each (McAuley et al., 2015; He and McAuley, 2016).", "The ratings across all datasets are in the range of [1, 5].", "We split each dataset into training (80%), validation (10%), and test (10%) sets.", "Next, to validate the effectiveness of ESCOFILT, we compared its prediction performance against four state-of-the-art baselines: BENEFICT (Pugoy and Kao, 2020): This
recent recommender model uniquely integrates BERT, MSP, and MLP to learn representations, explanations, and interactions.", "DeepCoNN (Zheng et al., 2017): This is the first deep collaborative neural network model that is based on two parallel CNNs to jointly learn user and item features.", "MPCN (Tay et al., 2018): Akin to NARRE, this CNN-less model employs a new type of dual attention for identifying relevant reviews.", "NARRE (Chen et al., 2018): Similar to DeepCoNN, it is a neural attentional regression model that integrates two parallel CNNs and the review-level attention mechanism.", "All these recommender models employed the same dataset split.", "We then computed the root mean square error (RMSE) on the test dataset, as indicated by the formula below.", "RMSE is a widely used metric for evaluating a model's rating prediction accuracy (Steck, 2013).", "For ESCOFILT, we mainly based its summarization component on BERT Extractive Summarizer 2 by Miller (2019).", "We also utilized the pre-trained BERT-Large model afforded by the Transformers library of HuggingFace 3 .", "In our implementation 4 , the following hyperparameters were fixed: learning rate: 0.006; number of MLP layers: 4; item summary ratio: 0.4; user summary ratio: 0.4. On the other hand, we operated an exhaustive grid search over these hyperparameters: number of epochs: [1, 30]; latent vector dimension (m): {32, 128, 220}. Due to its architectural similarity to ESCOFILT, we reimplemented BENEFICT by augmenting it with the pre-trained BERT-Large model and adopting our model's fusion and latent vector dimension strategies.", "For DeepCoNN, MPCN, and NARRE, we employed the extensible NRRec framework 5 and retained the other hyperparameters reported in the framework (Liu et al., 2019b).", "For the four baselines, we also performed an exhaustive grid search over the following:", "All models, including ESCOFILT, used the same optimizer, Adam, which leverages the power of adaptive learning
rates during training (Kingma and Ba, 2014).", "This makes the selection of a learning rate less cumbersome, leading to faster convergence (Chen et al., 2018).", "Without special mention, the models shared the same random seed, batch size (128), and dropout rate (0.5).", "We selected the model configuration with the lowest RMSE on the validation set.", "We ran our experiments on an NVIDIA GeForce RTX 2080 Ti.", "The overall performances of our model and the other baselines are summarized in Table 3. It is essential to remark that although utilizing information derived from reviews is beneficial, a model's performance can vary contingent on how the said information is considered.", "These are our general findings: First, our proposed model consistently outperforms all baselines across all datasets.", "This ascertains the effectiveness of ESCOFILT and clearly answers RQ1.", "Moreover, this validates our case that coupling BERT (a superior review feature extractor) with embedding clustering enables user and item representations to have finer granularity and fewer redundancies.", "Second, receiving the two lowest average RMSE values, BERT-based models (ESCOFILT and BENEFICT) have generally better prediction accuracies than the rest of the mostly CNN-powered baselines.", "This particular observation verifies the necessity of integrating BERT in a CF architecture.", "Unlike its mainstream counterparts, BERT produces more semantically meaningful embeddings that keep essential elements such as global context and word frequency information.", "This section further discusses the efficacy of K-Means embedding clustering, instrumental in producing user and item representative summaries.", "Concerning this, we prepared three variants of our model.", "First is ESCOFILT-N, which does not utilize any embedding clustering.", "Instead, it relies on traditional embeddings that are neither pre-trained nor review-based.", "Table 3: Performance comparison of the recommender models. Model (Automotive / Digital Music / Instant Video / Patio, Lawn, & Garden / Average): BENEFICT (0.9023 / 0.8910 / 0.9746 / 0.9352 / 0.9258); DeepCoNN (0.9076 / 0.8904 / 0.9778 / 0.9316 / 0.9269); MPCN (0.9107 / 0.9298 / 0.9976 / 0.9362 / 0.9436); NARRE (0.9144 / 0.8915 / 0.9758 / 0.9539 / 0.9339); ESCOFILT (0.8968 / 0.8831 / 0.9742 / 0.9298 / 0.9210).", "They are randomly initialized yet optimized during training.", "Another variant is ESCOFILT-I, wherein only item reviews undergo embedding clustering while the user component is based on traditional embeddings.", "ESCOFILT-U also operates the same way; the difference is that only user reviews are processed by embedding clustering.", "As Figure 2 shows, the default ESCOFILT configuration attains the lowest validation RMSE values and is the best across the datasets, while the worst variant is ESCOFILT-N.", "This gives credence to embedding clustering's effectiveness and addresses RQ2; it can simultaneously capture user preferences and item characteristics, resulting in precise representations and accurate rating prediction.", "There appears to be a trend as well: the second-best and the third-best variants are ESCOFILT-I and ESCOFILT-U, respectively.", "In some instances, ESCOFILT-I seems to be on par with the default ESCOFILT variant.", "This implies that items stand to benefit more than users from embedding clustering.", "One possible explanation is that each item normally receives a far greater quantity of reviews than each user actually writes, translating to more possibly extractable information and features.", "Hence, item reviews have a more significant influence than user reviews in determining ratings.", "Still, this does not immediately suggest that user embedding clustering is not helpful.", "It needs to be integrated first with item embedding clustering via the MLP to discover relevant user-item interactions, leading to our original model's performance.", "The assessment of explanations in existing recommender systems literature is generally limited to specific case studies.",
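The embedding-clustering step discussed above (cluster sentence embeddings, then keep the sentence nearest each centroid as the representative summary) can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the plain k-means routine, the toy 2-D "embeddings", and the function names are all assumptions made for the example.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    # Plain k-means: an illustrative stand-in for clustering
    # BERT sentence embeddings into k groups.
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign each embedding to its nearest centroid
        dists = np.linalg.norm(X[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids

def representative_sentences(X, k):
    # An extractive "summary": the index of the sentence closest
    # to each cluster centroid, mirroring how cluster centers are
    # turned into user/item representative summaries.
    centroids = kmeans(X, k)
    dists = np.linalg.norm(X[:, None] - centroids[None], axis=2)
    return sorted(set(dists.argmin(axis=0)))

# toy embeddings: two tight groups of "sentences"
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
picked = representative_sentences(X, k=2)
```

With these toy points, one sentence is picked from each of the two groups; in the model, the picked sentences would be concatenated into the user or item summary before BERT re-encoding.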
"Most of these relied on simple qualitative analysis of attention weights and high-scoring reviews on selected samples (Liu et al., 2019a; Seo et al., 2017; Wu et al., 2019).", "The assessment criterion provided in the NARRE and BENEFICT papers went a little further by asking human raters to score each explanation's helpfulness or usefulness on a given Likert scale (Chen et al., 2018; Pugoy and Kao, 2020).", "Nevertheless, to the best of our knowledge, there does not appear to be a comprehensive set of criteria that assesses the real-life explainability of explanations.", "We contend that it is increasingly necessary to measure how people actually perceive explanation texts generated by recommender models; after all, these texts aim to explain entities in real life.", "Table 4: Comparison of the three explanation types based on the real-life explainability criteria (pointwise evaluation). Model (Coherence / Completeness / Lack of Alternatives / Novelty / Perceived Truth / Quality / Visualization): BENEFICT (3.52 / 3.82 / 3.75 / 3.58 / 3.87 / 3.65 / 3.65); NARRE (3.68 / 3.82 / 3.82 / 3.72 / 3.75 / 3.72 / 3.92); ESCOFILT (3.92 / 3.87 / 3.73 / 3.75 / 3.92 / 3.72 / 3.78).", "Hence, we propose the following explainability criteria, which are inspired by Zemla et al. (2017):", "1. Coherence: Parts of the explanation fit together coherently.", "2. Completeness: There are no gaps in the explanation.", "3. Lack of Alternatives: There are probably few to no reasonable alternative explanations.", "4. Novelty: I learned something new from the explanation.", "5. Perceived Truth: I believe this explanation to be true.", "6. Quality: This is a good explanation.", "7. Visualization: It is easy to visualize what the explanation is saying.
5.2 Human Assessment of Explanations We generated a total of 90 item explanations, 30 each from BENEFICT (token-level), NARRE (review-level), and ESCOFILT (summary-level).", "For pointwise evaluation, we asked two human judges to assess the explanations based on our proposed real-life explainability criteria on a five-point Likert scale.", "For listwise evaluation, we instructed them to rank the three explanation types for every text according to helpfulness.", "We further examined these results by determining the strength of agreement between the two judges, using Cohen's Kappa coefficient (κ), wherein -1 indicates a less than chance agreement, 0 refers to a random agreement, and 1 denotes a perfect agreement (Borromeo and Toyama, 2015; Landis and Koch, 1977).", "Table 4 summarizes the results of the human judges' pointwise evaluation.", "For five out of seven criteria, ESCOFILT-derived explanations have the highest explainability scores.", "Specifically, summary-level explanations are most coherent, most complete, most novel, and most truthful.", "ESCOFILT's strongest aspect is its perceived truth, obtaining a mean rating of 3.92 and κ = 0.28, which indicates a fair inter-judge agreement.", "Interestingly, both ESCOFILT and NARRE have the best quality, with the same mean rating of 3.72.", "The Kappa coefficient is 0.11, implying that the judges agree with each other to a certain extent.", "Considering that a review-level explanation is simply the highest weighted review, our model-generated explanations are assessed on par with the former.", "Furthermore, review-level explanations have the highest explainability scores in two other criteria, i.e., lack of alternatives and visualization.", "NARRE's strongest aspect is that its explanations are easiest to visualize, having a mean rating of 3.92 and κ = 0.27, which denotes a fair inter-judge agreement.", "Lastly, Figure 3 shows the results of the human judges' listwise evaluation.", "Our model produces the most
helpful explanations; such explanations are ranked first for almost 83% of the items.", "These are followed far behind by NARRE's explanations, ranked first for nearly 17% of the items.", "None of BENEFICT's explanations are ranked first.", "With κ = 0.45 for ranking consistency, there is a moderate agreement between the judges.", "In summary, these results clearly illustrate the superiority of summary-level explanations in real life, which can offer the guidance users need to make future purchasing decisions, thereby satisfying RQ3.", "In this study, unifying representations and explanations, in the form of extractive summaries, has further enhanced collaborative filtering accuracy and explainability.", "We have successfully developed a model that uniquely integrates BERT, embedding clustering, and MLP.", "Our experiments on various datasets verify ESCOFILT's predictive capability, and the human judges' assessments validate its explainability in real life.", "In the future, we shall consider expanding our model's explainability capability by possibly incorporating other NLP principles such as abstractive summarization and natural language generation.", "This work was funded in part by Qualcomm through a Taiwan University Research Collaboration Project and also in part by the Ministry of Science and Technology, Taiwan, under NCKU B109-K027D and MOST 109-2221-E-006-173 grants, respectively." ]
[ "objective", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", 
"abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "method", "other" ]
[ "Sentence compression methods based on LSTM can generate fluent compressed sentences.", "However, the performance of these methods is significantly degraded when compressing long sentences since it does not explicitly handle syntactic features.", "To solve this problem, we propose a higher-order syntactic attention network (HiSAN) that can handle higher-order dependency features as an attention distribution on LSTM hidden states.", "Furthermore, to avoid the influence of incorrect parse results, we train HiSAN by maximizing the probability of a correct output together with the attention distribution.", "Experiments on the Google sentence compression dataset show that our method achieved the best performance in terms of F 1 as well as ROUGE-1,2 and L scores, 83.2, 82.9, 75.8 and 82.7, respectively.", "In subjective evaluations, HiSAN outperformed baseline methods in both readability and informativeness.", "Sentence compression is the task of compressing long sentences into short and concise ones by deleting words.", "To generate compressed sentences that are grammatical, many researchers (Jing, 2000; Knight and Marcu, 2000; Berg-Kirkpatrick et al., 2011; Filippova and Altun, 2013) have adopted tree trimming methods.", "Even though Filippova and Altun (2013) reported the best results on this task, automatic parse errors greatly degrade the performances of these tree trimming methods.", "1 We used an LSTM-based sentence compression method (Filippova et al., 2015) in the evaluation setting as described in Section 4.1.", "Recently, Filippova et al. 
(2015) proposed an LSTM sequence-to-sequence (Seq2Seq) based sentence compression method that can generate fluent sentences without utilizing any syntactic features.", "Therefore, Seq2Seq based sentence compression is a promising alternative to tree trimming.", "However, as reported for a machine translation task (Cho et al., 2014; Pouget-Abadie et al., 2014; Koehn and Knowles, 2017), the longer the input sentences are, the worse the Seq2Seq performances become.", "We also observed this problem in the sentence compression task.", "As shown in Figure 1, the performance of Seq2Seq is degraded when compressing long sentences.", "In particular, the performance significantly falls if sentence length exceeds 26 words.", "This is an important problem, because sentences longer than the average sentence length (=28 words) account for 42% of the Google sentence compression dataset.", "2 We treat the maximum distance from the root node to a leaf node as dependency tree depth.", "Long sentences tend to have deep dependency trees, which have long distances from the root node to words at leaf nodes.", "Therefore, improving compression performance for sentences with such deep dependency trees can help to compress longer sentences.", "To deal with sentences that have deep dependency trees, we focus on the chains of dependency relationships.", "Figure 3 shows an example of a compressed sentence with its dependency tree.", "The topic of this sentence is an import agreement related to electricity.", "Thus, to generate an informative compression, the compressed sentence must retain the country name.", "In this example, the compressed sentence should keep the phrase from Kyrgyz Republic and Tajikistan.", "Thus, the compressed sentence must also keep the dependency chain import, resolution and signed, because the phrase is a child of this chain.", "By considering such higher-order dependency chains, the system can implement informative compression.", "As can be seen from the example in Figure 3, tracking a higher-order
dependency chain for each word would help to compress long sentences.", "This paper refers to such dependency relationships by the expression d-length dependency chains.", "To handle a d-length dependency chain for sentence compression with LSTM, we propose the higher-order syntactic attention network (HiSAN).", "HiSAN computes the deletion probability for a given word based on the d-length dependency chain starting from the word.", "The d-length dependency chain is represented as an attention distribution, learned using automatic parse trees.", "To alleviate the influence of parse errors in automatic parse trees, we learn the attention distribution together with deletion probability.", "Evaluation results on the Google sentence compression dataset (Filippova and Altun, 2013) show that HiSAN achieved the best F1, ROUGE-1, 2 and L scores of 83.2, 82.9, 75.8 and 82.7, respectively.", "In particular, HiSAN attained remarkable compression performance with long sentences.", "In human evaluations, HiSAN also outperformed the baseline methods.", "Sentence compression can be regarded as a tagging task, where given a sequence of input tokens x = (x_0, ..., x_n), a system assigns output label y_t, which is one of three types of specific labels (keep, delete, or end-of-sentence), to each input token x_t (1 ≤ t ≤ n).", "The LSTM-based approaches for sentence compression are mostly based on the bi-LSTM based tagging method (Tagger) (Klerke et al., 2016; Wang et al., 2017; Chen and Pan, 2017) or Seq2Seq (Filippova et al., 2015; Tran et al., 2016).", "Tagger independently predicts labels in a point estimation manner, whereas Seq2Seq predicts labels by considering previously predicted labels.", "Since Seq2Seq is more expressive than Tagger, we built HiSAN on the baseline Seq2Seq model.", "Our baseline Seq2Seq is a version of Filippova et al.
(2015) extended through the addition of bi-LSTM, an input feeding approach (Vinyals et al., 2015; Luong et al., 2015), and a monotonic hard attention method (Yao and Zweig, 2015; Tran et al., 2016).", "As described in the evaluations section, this baseline achieved comparable or even better scores than the state-of-the-art scores reported in Filippova et al. (2015).", "The baseline Seq2Seq model consists of embedding, encoder, decoder, and output layers.", "In the embedding layer, the input tokens x are converted to the embeddings e.", "As reported in Wang et al. (2017), syntactic features are important for learning a generalizable embedding for sentence compression.", "Following their results, we also introduce syntactic features into the embedding layer.", "Specifically, we combine the surface token embedding w_i, POS embedding p_i, and dependency relation label embedding r_i into a single vector as follows: e_i = [w_i, p_i, r_i], (1) where [] represents vector concatenation, and e_i is an embedding of token x_i.", "The encoder layer converts the embedding e into a sequence of hidden states h = (h_0, ..., h_n) using a stacked bidirectional-LSTM (bi-LSTM) as follows: h_i = [h_i^f, h_i^b] (2), h_i^f = LSTM^f(h_{i-1}^f, e_i) (3), h_i^b = LSTM^b(h_{i+1}^b, e_i), (4) where LSTM^f and LSTM^b represent the forward and backward LSTM, respectively.", "The final state of the backward LSTM, h_0^b, is inherited by the decoder as its initial state.", "In the decoder layer, the concatenation of a 3-bit one-hot vector, which is determined by the previously predicted label y_{t-1}, the previous final hidden state d_{t-1} (explained later), and the input embedding of x_t, is encoded into the decoder hidden state s_t using stacked forward LSTMs.", "Contrary to the original softmax attention method, we can deterministically focus on one encoder hidden state h_t (Yao and Zweig, 2015) to predict y_t in the sentence compression task (Tran et al., 2016).", "3 In the output layer, label probability
is calculated as follows: P(y_t | y_<t, x) = softmax(W_o d_t)_{y_t}, (5) d_t = [h_t, s_t] (6) where W_o is a weight matrix of the softmax layer and the subscript y_t selects the element corresponding to label y_t (i.e., a binary vector where the y_t-th element is set to 1 and the other elements to 0).", "The key component of HiSAN is its attention module.", "Unlike the baseline Seq2Seq, HiSAN employs a packed d-length dependency chain as distributions in the attention module.", "Section 3.1 explains the packed d-length dependency chain.", "Section 3.2 describes the network structure of our attention module, and Section 3.3 explains the learning method of HiSAN.", "The probability for a packed d-length dependency chain is obtained from a dependency graph, which is an edge-factored dependency score matrix (Hashimoto and Tsuruoka, 2017; Zhang et al., 2017).", "3 This is because the output length is the same as the input length, and each x_t can be assigned to each y_t in a one-to-one correspondence.", "First, we explain the dependency graph.", "Figure 4(a) shows an example of the dependency graph.", "HiSAN represents a dependency graph as an attention distribution generated by the attention module.", "A probability for each dependency edge is obtained from the attention distribution.", "Figure 4(b) shows an example of the packed d-length dependency chain.", "With our recursive attention module, the probability for a packed d-length dependency chain is computed as the sum of probabilities for each path yielded by recursively tracking from a word to its d-th ancestor.", "The probability for each path is calculated as the product of the probabilities of tracked edges.", "The probability for the chain can represent several d-length dependency chains compactly, and so alleviates the influence of incorrect parse results.", "This is the advantage of using dependency graphs.", "Figure 5 shows the prediction process of HiSAN.", "In this figure, HiSAN predicts output label y_7 from the input sentence.", "The
prediction process of HiSAN is as follows.", "1. The Parent Attention module calculates P_parent(x_j | x_t, x), the probability of x_j being the parent of x_t, by using h_j and h_t.", "This probability is calculated for all pairs of x_j, x_t.", "The arc in Figure 5 shows the most probable dependency parent for each child token.", "2. The Recursive Attention module calculates α_{d,t,j}, the probability of x_j being the d-th order parent (d denotes the chain length) of x_t, by recursively using P_parent(x_j | x_t, x).", "α_{d,t,j} is also treated as an attention distribution, and used to calculate γ_{d,t}, the weighted sum of h for each length d.", "For example, the 3-length dependency chain of word x_7 with the highest probability is x_6 → x_5 → x_2.", "The encoder hidden states h_6, h_5 and h_2, which correspond to the dependency chain, are weighted by the calculated parent probabilities α_{1,7,6}, α_{2,7,5} and α_{3,7,2}, respectively, and then fed to the selective attention module.", "3. The Selective Attention module calculates a weight β_{d,t} for each γ_{d,t} from its length d ∈ D.", "D represents a group of chain lengths.", "β_{d,t} is calculated from encoder and decoder hidden states.", "Each weighted β_{d,t} γ_{d,t} is summed into o_t, the output of the selective attention module.", "4. Finally, the calculated o_t is concatenated and input to the output layer.", "Details of each module are explained in the following subsection.", "Zhang et al.
(2017) formalized dependency parsing as the problem of independently selecting the parent of each word in a sentence.", "They produced a distribution over possible parents for each child word by using the attention layer on bi-LSTM hidden layers.", "In a dependency tree, a parent may have more than one child.", "Under this constraint, dependency parsing is represented as follows.", "Given sentence S = (x_0, x_1, ..., x_n), the parent of each token x_t ∈ S \ {x_0} is selected from S \ {x_t}.", "Note that x_0 denotes the root node.", "The probability of token x_j being the parent of token x_t in sentence x is calculated as follows: P_parent(x_j | x_t, x) = softmax(g(h_j, h_t))_{x_j}, (7) g(h_j, h_t) = v_a^T tanh(U_a h_j + W_a h_t), (8) where v_a, U_a and W_a are weight matrices of g.", "Different from the attention based dependency parser, P_parent(x_j | x_t, x) is jointly learned with the output label probability P(y | x) in the training phase.", "Training details are given in Section 3.3.", "The recursive attention module recursively calculates α_{d,t,j}, the probability of x_j being the d-th order parent of x_t, as follows:", "α_{d,t,j} = Σ_{k=0}^{n} α_{1,t,k} α_{d-1,k,j} (d > 1); α_{1,t,j} = P_parent(x_j | x_t, x).", "(9) Furthermore, in a dependency parse tree, root should not have any parent, and a token should not depend on itself.", "In order to satisfy these rules, we impose the following constraints on α_{1,t,j}: α_{1,t,j} = 1 if (t = 0 ∧ j = 0); 0 if (t = 0 ∧ j > 0); 0 if (t ≠ 0 ∧ t = j). (10) The 1st and 2nd lines of Eq. (10) represent the case that the parent of root is also root.", "These constraints imply that root does not have any parent.", "The 3rd line of Eq. (10) prevents a token from depending on itself.", "Because the 1st line of Eq. (9) is similar to the definition of matrix multiplication, Eq. (9) can be efficiently computed on a CPU and GPU 4 .", "4 In training, HiSAN with 1- and 3-length dependency chains took 25 and 26 minutes, respectively, per epoch on an Intel Xeon E5-2697 v3 2.60 GHz.", "By recursively using the
single attention distribution, it is no longer necessary to prepare additional attention distributions for each order when computing the probability of higher-order parents.", "Furthermore, since it is not necessary to learn multiple attention distributions, it becomes unnecessary to use hyper-parameters for adjusting the weight of each distribution in training.", "Finally, this method can avoid the problem of sparse higher-order dependency relations in the training dataset.", "The above calculated α_{d,t,j} is used to weight the bi-LSTM hidden layer h as follows: γ_{d,t} = Σ_{j=0}^{n} α_{d,t,j} h_j. (11)", "To select suitable dependency orders of the input sentence, the selective attention module weights and sums the hidden states γ_{d,t} into o_t by using the weighting parameter β_{d,t}, according to the current context, as follows:", "β_{d,t} = softmax(W_c c_t)_d, (12) o_t = Σ_{d ∈ {0} ∪ D} β_{d,t} γ_{d,t}, (13)", "where W_c is the weight matrix of the softmax layer, D is a group of chain lengths, c_t is a vector representing the current context, γ_{0,t} is a zero-vector, and β_{0,t} indicates the weight when the method does not use the dependency features.", "Context vector c_t is calculated as c_t = [h_0, h_n, s_t] using the current decoder hidden state s_t.", "The calculated o_t is concatenated and input to the output layer.", "In detail, d_t in Eq. (5) is replaced by the concatenated vector d′_t = [h_t, o_t, s_t]; furthermore, instead of d_t, d′_t is also fed to the input of the decoder LSTM at t + 1.", "To alleviate the influence of parse errors, we jointly update the 1st-order attention distribution α_{1,t,k} and the label probability P(y | x) (Kamigaito et al., 2017).", "The 1st-order attention distribution is learned from dependency parse trees.", "If a_{t,j} = 1 denotes an edge between parent word w_j and child w_t on a dependency tree (a_{t,j} = 0 denotes that w_j is not a parent of w_t), the objective function of our method can be defined as: log P(y | x) + λ Σ_{j=1}^{n} Σ_{t=1}^{n} a_{t,j} log α_{1,t,j}, (14) where λ is a
hyper-parameter that controls the importance of the output labels and parse trees in the training dataset.", "This evaluation used the Google sentence compression dataset (Filippova and Altun, 2013) 5 .", "This dataset contains information on compression labels, part-of-speech (POS) tags, dependency parents and dependency relation labels for each sentence.", "We used the first and last 1,000 sentences of comp-data.eval.json as our test and development datasets, respectively.", "Note that our test dataset is compatible with that used in previous studies (Filippova et al., 2015; Tran et al., 2016; Klerke et al., 2016; Wang et al., 2017).", "In this paper, we trained the following baselines and HiSAN on all sentences of sent-comp.train*.json (total 200,000 sentences) 6 , 7 , 8 .", "In our experiments, we replaced rare words that appear fewer than 10 times in our training dataset with a special token UNK.", "After this filtering, the input vocabulary size was 23,168.", "For a fair comparison with HiSAN, we used the input features described in Eq. (1) for the following baseline methods: 5 https://github.com/google-research-datasets/sentence-compression 6 Note that Filippova et al.
(2015) used 2,000,000 sentences for training their method, but these datasets are not publicly available.", "7 We also demonstrate an experimental evaluation on a small training set (total 8,000 sentences) that was used in previous research.", "The results of this setting are listed in our supplemental material.", "8 Note that the large training dataset lacks periods at the end of compressed sentences.", "To unify the form of compressed sentences in the small and large settings, we added periods to the end of compressed sentences in the large training dataset.", "Tagger: A method that regards sentence compression as a tagging task based on bi-LSTM (Klerke et al., 2016; Wang et al., 2017).", "Tagger+ILP: An extension of Tagger that integrates ILP (Integer Linear Programming)-based dependency tree trimming (Wang et al., 2017).", "We set their positive parameter to 0.2.", "Bi-LSTM: A method that regards sentence compression as a sequence-to-sequence translation task, proposed by Filippova et al. (2015).", "For a fair comparison, we replaced their one-directional LSTM with the more expressive bi-LSTM in the encoder part.", "The initial state of the decoder is set to the sum of the final states of the forward and backward LSTMs.", "Bi-LSTM-Dep: An extension of Bi-LSTM that exploits features obtained from a dependency tree (named LSTM-PAR-PRES in Filippova et al.
(2015)).", "Following their work, we fed the word embedding and the predicted label of a dependency parent word to the current decoder input of Bi-LSTM.", "Attn: An extension of the softmax based attention method (Luong et al., 2015).", "We replaced h_t in Eq. (6) with the weighted sum calculated by the commonly used concat attention (Luong et al., 2015).", "HiSAN-Dep: A variant of HiSAN that utilizes the pipeline approach.", "We fix α_{1,j,t} to 1.0 if x_j is a parent of x_t in the input dependency parse tree, and 0.0 otherwise.", "In this baseline, D = {1} was used.", "Following the previous work (Wang et al., 2017), the dimensions of the word embeddings, LSTM layers, and attention layer were set to 100.", "For the Tagger-style methods, the depth of the LSTM layer was set to 3, and for the Seq2Seq-style methods, the depth of the LSTM layer was set to 2. In this setting, all methods have a total of six LSTM layers.", "The dimensions of the POS and dependency-relation label embeddings were set to 40.", "All parameters were initialized by Glorot and Bengio (2010)'s method.", "For all methods, we applied Dropout (Srivastava et al., 2014) to the input of the LSTM layers.", "All dropout rates were set to 0.3.", "During training, the learning rate was tuned with Adam (Kingma and Ba, 2014).", "The initial learning rate was set to 0.001.", "The maximum number of training epochs was set to 30.", "The hyper-parameter λ was set to 1.0 in the supervised attention setting.", "All gradients were averaged in each mini-batch.", "The maximum mini-batch size was set to 16.", "The order of mini-batches was shuffled at the end of each training epoch.", "The clipping threshold of the gradient was set to 5.0.", "We selected trained models with early stopping based on maximizing per-sentence accuracy (i.e., how many compressions could be fully reproduced) on the development dataset.", "To obtain a compressed sentence, we used greedy decoding, rather than beam decoding, as
the latter attained no gain on the development dataset.", "All methods were written in C++ on Dynet (Neubig et al., 2017).", "In the automatic evaluation, we used token-level F1-measure (F1) as well as recall of ROUGE-1, ROUGE-2 and ROUGE-L (Lin and Och, 2004) as evaluation measures.", "We used C = (system compression ratio) / (gold compression ratio) to evaluate how close the compression ratio of system outputs was to that of the gold compressed sentences.", "The average compression ratio of the gold compression for input sentences was 39.8.", "We used the micro-average for F1-measure and compression ratio, and the macro-average for ROUGE scores, respectively.", "To verify the benefits of our methods on long sentences, we additionally report scores on sentences longer than the average sentence length (= 28) in the test set.", "The average compression ratio of the gold compression for longer input sentences was 31.4.", "All results are reported as the average scores of five trials.", "In each trial, different random choices were used to generate the initial values of the embeddings and the order of mini-batch processing.", "Table 1 shows the results.", "HiSANs outperformed the other methods.", "In particular, HiSAN (d = {1, 2, 4}) achieved the best score on F1, ROUGE, and C in all settings.", "We used the ROUGE-1.5.5 script with options -n 2 -m -d -a.", "We also report the macro-average of F1-measure and compression ratio in our supplemental material.", "Note that we used the average of all metrics to decide the best score on the development dataset; the results are listed in our supplemental material.", "The F1 scores of HiSAN (ALL) were higher than the current state-of-the-art score of .82, reported by Filippova et al.
(2015).", "The improvements in F1 and ROUGE scores over the baseline methods in the LONG setting are larger than those in the ALL setting.", "From these results, we can conclude that d-length dependency chains are effective for sentence compression, especially in the case of longer than average sentences.", "HiSAN (d = {1}) outperformed HiSAN-Dep in F1 scores in the ALL and LONG settings.", "This result shows the effectiveness of jointly learning the dependency parse tree and the output labels.", "In the human evaluation, we compared the baselines with our method, which achieved the highest F1 score in the automatic evaluations.", "We used the first 100 sentences that were longer than the average sentence length (= 28) in the test set for the human evaluation.", "Similar to Filippova et al. (2015), each compressed sentence was rated by five raters who were asked to select a rating on a five-point Likert scale, ranging from one to five, for readability (Read) and for informativeness (Info).", "We report the average of these scores from the five raters.", "To investigate the differences between the methods, we also compared the baseline methods.", "[Table 2 (Read / Info / CR) — All: Tagger 4.54 / 3.41 / 30.9; Base 4.64 / 3.45 / 31.1; HiSAN (d = {1, 2, 4}) 4. ...]", "Table 2 shows the results.", "HiSAN (d = {1, 2, 4}) achieved better results than the baselines in terms of both readability and informativeness.", "The results agree with those obtained from the automatic evaluations.", "From the results on the sentences whose compressed sentences were different between Base and HiSAN (d = {1, 2, 4}), we can clearly observe the improvement attained by HiSAN (d = {1, 2, 4}) in informativeness.", "Table 3 shows examples of source sentences and their compressed variants output by the baselines and HiSAN (d = {1, 2, 4}).", "For both examples, the compressed sentence output by Base is grammatically correct.", "However, the informativeness is inferior to that attained by HiSAN (d = {1
, 2, 4}).", "The compressed sentence output by HiSAN-Dep in the second example lacks both readability and informativeness.", "We believe that this compression failure is caused by incorrect parse results, because HiSAN-Dep employs the features obtained from the dependency tree in the pipeline procedure.", "As reported in recent papers (Klerke et al., 2016; Wang et al., 2017), the F1 scores of Tagger match or exceed those of the Seq2Seq-based methods.", "The compressed sentence of the first example in Table 3 output by Tagger is ungrammatical.", "We believe that this is mainly because Tagger cannot consider the predicted labels of the previous words.", "Tagger-ILP outputs grammatically incorrect compressed sentences in both examples.", "This result indicates that the ILP constraint based on the parent-child relationships between words is insufficient to generate fluent sentences.", "Compared with these baselines, HiSAN (d = {1, 2, 4}) output compressed sentences that were fluent and had higher informativeness.", "This observation, which confirmed our expectations, is supported by the automatic and human evaluation results.", "We confirm that the compression performance of HiSAN actually improves if the sentences have deep dependency trees.", "Table 4 shows the automatic evaluation results for sentences with deep dependency trees.", "[Figure 6 example: Pakistan signed a resolution on Monday to import 1,300 MW of electricity from Kyrgyz Republic and Tajikistan to overcome power shortage in the summer season...]", "We can observe that HiSAN
with higher-order dependency chains has better compression performance if the sentences have deep dependency trees.", "Figure 6 shows a compressed sentence and its dependency graph as determined by HiSAN (d = {1, 2, 4}).", "Almost all arcs with large probabilistic weights are contained in the parsed dependency trees.", "Interestingly, some arcs not contained in the parsed dependency trees connect words which are linked by dependency chains in the parsed dependency tree (colored red).", "Considering that the training dataset does not contain such dependency relationships, we can infer that these arcs are learned in support of compressing sentences.", "This result meets our expectation that dependency chain information is necessary for compressing sentences accurately.", "Several neural network based methods for sentence compression use syntactic features.", "Filippova et al. (2015) employ the features obtained from automatic parse trees in an LSTM-based encoder-decoder in a pipeline manner.", "Wang et al. (2017) trim dependency trees based on the scores predicted by an LSTM-based tagger.", "Although these methods can consider dependency relationships between words, the pipeline approach and the 1st-order dependency relationships fail to compress longer than average sentences.", "Several recent machine translation studies also utilize syntactic features in Seq2Seq models.", "Eriguchi et al.
(2017); Aharoni and Goldberg (2017) incorporate syntactic features of the target language in the decoder part of Seq2Seq.", "Both methods outperformed Seq2Seq without syntactic features in terms of translation quality.", "However, both methods fail to provide an entire parse tree until the decoding phase is finished.", "Thus, these methods cannot track all possible parents for each word within the decoding process.", "Similar to HiSAN, Hashimoto and Tsuruoka (2017) use dependency features as attention distributions, but unlike HiSAN, they use pre-trained dependency relations and do not take into account the chains of dependencies.", "Marcheggiani and Titov (2017); Bastings et al. (2017) consider higher-order dependency relationships in Seq2Seq by incorporating a graph convolution technique (Kipf and Welling, 2016) into the encoder.", "However, the dependency information of the graph convolution technique is still given in a pipeline manner.", "Unlike the above methods, HiSAN can capture higher-order dependency features using d-length dependency chains without relying on pipeline processing.", "In this paper, we incorporated higher-order dependency features into Seq2Seq to compress sentences of all lengths.", "Experiments on the Google sentence compression test data showed that our higher-order syntactic attention network (HiSAN) achieved better performance than the baseline methods, with F1, ROUGE-1, ROUGE-2, and ROUGE-L scores of 83.2, 82.9, 75.8 and 82.7, respectively.", "Of particular importance, when challenged with longer than average sentences, HiSAN outperformed the baseline methods in terms of F1 and ROUGE-1, -2, and -L scores.", "Furthermore, HiSAN also outperformed the previous methods for both readability and informativeness in human evaluations.", "From the evaluation results, we conclude that HiSAN is an effective tool for the sentence compression task." ]
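As a rough illustration of the d-length dependency chains that HiSAN attends over, the sketch below (hypothetical helper names; not the authors' code) builds, for each order d, a mask marking which token pairs are connected by a d-step head chain in a parse tree given as parent indices:

```python
def chain_masks(parents, orders=(1, 2, 4)):
    """For each order d, mark pairs (j, t) where x_j is the d-th
    ancestor of x_t. parents[t] is the head of token t (-1 = root)."""
    n = len(parents)
    masks = {}
    for d in orders:
        mask = [[0.0] * n for _ in range(n)]
        for t in range(n):
            j, ok = t, True
            for _ in range(d):          # climb d head links
                j = parents[j]
                if j < 0:               # ran off the root before d steps
                    ok = False
                    break
            if ok:
                mask[j][t] = 1.0
        masks[d] = mask
    return masks

# toy chain tree: each token's head is the previous token
m = chain_masks([-1, 0, 1, 2])
assert m[1][2][3] == 1.0  # token 2 is the parent of token 3
assert m[2][1][3] == 1.0  # token 1 is the 2nd-order ancestor of token 3
```

In HiSAN these chain relations come from a jointly learned attention distribution rather than a fixed parse; the hard 1.0/0.0 mask above corresponds to the HiSAN-Dep pipeline baseline described in the text.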
[ "abstain", "abstain", "objective", "result", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "other", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", 
"abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "result", "abstain", "abstain", "result" ]
[ "State-of-the-art pre-trained language models have been shown to memorise facts and perform well with limited amounts of training data.", "To gain a better understanding of how these models learn, we study their generalisation and memorisation capabilities in noisy and low-resource scenarios.", "We find that the training of these models is almost unaffected by label noise and that it is possible to reach near-optimal results even on extremely noisy datasets.", "However, our experiments also show that they mainly learn from high-frequency patterns and largely fail when tested on low-resource tasks such as few-shot learning and rare entity recognition.", "To mitigate such limitations, we propose an extension based on prototypical networks that improves performance in low-resource named entity recognition tasks.", "With recent advances in pre-trained language models (Peters et al., 2018; Devlin et al., 2019; Liu et al., 2019; He et al., 2020), the field of natural language processing has seen improvements in a wide range of tasks and applications.", "Having acquired general-purpose knowledge from large amounts of unlabelled data, such methods have been shown to learn effectively with limited labelled data for downstream tasks (Howard and Ruder, 2018) and to generalise well to out-of-distribution examples (Hendrycks et al., 2020).", "Previous work has extensively studied what such models learn, e.g. 
the types of relational or linguistic knowledge (Tenney et al., 2019; Jawahar et al., 2019; Rogers et al., 2020).", "However, the process of how these models learn from downstream data and the qualitative nature of their learning dynamics remain unclear.", "A better understanding of the learning processes in these widely-used models is needed in order to know in which scenarios they will fail and how to improve them towards more robust language representations.", "The fine-tuning process in pre-trained language models such as BERT (Devlin et al., 2019) aims to strike a balance between generalisation and memorisation.", "For many applications it is important for the model to generalise: to learn the common patterns in the task while discarding irrelevant noise and outliers.", "However, rejecting everything that occurs infrequently is not a reliable learning strategy, and in many low-resource scenarios memorisation can be crucial to performing well on a task (Tu et al., 2020).", "By constructing experiments that allow for full control over these parameters, we are able to study the learning dynamics of models in conditions of high label noise or low label frequency.", "To our knowledge, this is the first qualitative study of the learning behaviour of pre-trained transformer-based language models in conditions of extreme label scarcity and label noise.", "We find that models such as BERT are particularly good at learning general-purpose patterns, as generalisation and memorisation become separated into distinct phases during their fine-tuning.", "We also observe that the main learning phase is followed by a distinct performance plateau for several epochs before the model starts to memorise the noise.", "This makes the models more robust with regard to the number of training epochs and allows for noisy examples in the data to be identified based only on their training loss.", "However, we find that these excellent generalisation
properties come at the cost of poor performance in few-shot scenarios with extreme class imbalances.", "Our experiments show that BERT is not able to learn from individual examples and may never predict a particular label until the number of training instances passes a critical threshold.", "For example, on the CoNLL03 (Sang and De Meulder, 2003) dataset it requires 25 instances of a class to learn to predict it at all and 100 examples to predict it with some accuracy.", "To address this limitation, we propose a method based on prototypical networks (Snell et al., 2017) that augments BERT with a layer that classifies test examples by finding their closest class centroid.", "The method considerably outperforms BERT in challenging training conditions with label imbalances, such as the WNUT17 (Derczynski et al., 2017) rare entities dataset.", "Our contributions are the following: 1) We identify a second phase of learning where BERT does not overfit to noisy datasets.", "2) We present experimental evidence that BERT is particularly robust to label noise and can reach near-optimal performance even with extremely strong label noise.", "3) We study forgetting in BERT and verify that it is dramatically less forgetful than some alternative methods.", "4) We empirically observe that BERT completely fails to recognise minority classes when the number of examples is limited, and we propose a new model, ProtoBERT, which outperforms BERT on few-shot versions of CoNLL03 and JNLPBA, as well as on the WNUT17 dataset.", "Several studies have been conducted on neural models' ability to memorise and recall facts seen during their training.", "Petroni et al. (2019) showed that pre-trained language models are surprisingly effective at recalling facts while Carlini et al.
(2019) demonstrated that LSTM language models are able to consistently memorise single out-of-distribution (OOD) examples during the very first phase of training and that it is possible to retrieve such examples at test time.", "Liu et al. (2020) found that regularising early phases of training is crucial to prevent the studied CNN residual models from memorising noisy examples later on.", "They also propose a regularisation procedure useful in this setting.", "Similarly, Li et al. (2020) analyse how early stopping and gradient descent affect model robustness to label noise.", "Toneva et al. (2019), on the other hand, study forgetting in visual models.", "They find that models consistently forget a significant portion of the training data and that this fraction of forgettable examples is mainly dependent on intrinsic properties of the training data rather than the specific model.", "In contrast, we show that a pretrained BERT forgets examples at a dramatically lower rate compared to a BiLSTM and a non-pretrained variant.", "Memorisation is closely related to generalisation: neural networks have been observed to learn simple patterns before noise (Arpit et al., 2017) and to generalise despite being able to completely memorise random examples (Zhang et al., 2017).", "Zhang et al. (2021) also show that our current understanding of statistical learning theory cannot explain the superhuman generalisation performance of large neural models across many areas of study.", "Hendrycks et al. (2020) show that pre-trained models generalise better on out-of-distribution data and are better able to detect such data compared to non-pretrained methods, but that they still do not cleanly separate in- and out-of-distribution examples.", "Kumar et al.
(2020) find that pre-trained methods such as BERT are sensitive to spelling noise and typos.", "In contrast to noise in the input, we focus on the models' learning dynamics in the presence of label noise and find that pre-trained methods are remarkably resilient to such cases.", "We investigate the performance of pre-trained language models in specific adverse conditions.", "In order to evaluate generalisation abilities, we first create datasets with varying levels of label noise by randomly permuting some of the labels in the training data.", "This procedure allows us to pinpoint noisy examples and evaluate the performance on clean and noisy datapoints separately.", "Then, in order to investigate memorisation, we train the models on datasets that contain only a small number of examples for a particular class.", "This allows us to evaluate how well the models are able to learn from individual datapoints as opposed to high-frequency patterns.", "We make the code for the experiments available online: https://github.com/Michael-Tanzer/BERT-mem-lowres.", "Datasets: We focus on the task of named entity recognition (NER) and employ the CoNLL03 (Sang and De Meulder, 2003), the JNLPBA (Collier and Kim, 2004), and the WNUT17 (Derczynski et al., 2017) datasets.", "NER is commonly used for evaluating pre-trained language models on structured prediction, and its natural class imbalance is well suited for our probing experiments.", "CoNLL03 and JNLPBA are standard datasets for NER and Bio-NER respectively.", "The WNUT17 dataset is motivated by the observation that state-of-the-art methods tend to memorise entities during training (Augenstein et al., 2017).", "The dataset focuses on identifying unusual or rare entities at test time that cannot be simply memorised by the model.", "We evaluate based on entity-level F1 unless stated otherwise.", "Language models: We use BERT-base (Devlin et al., 2019) as the main language model for our experiments, as BERT is widely used in
practice and other variations of pre-trained language models build on a similar architecture.", "The model is augmented with a classification feed-forward layer and fine-tuned using the cross-entropy loss with a learning rate of 10^-4.", "AdamW (Loshchilov and Hutter, 2019) is used during training with a weight decay of 0.01 and a linear warm-up rate of 10%.", "The test results are recorded using the model that produced the highest validation metrics.", "We compare BERT's behaviour with that of other pre-trained transformers such as RoBERTa (Liu et al., 2019) and DeBERTa (He et al., 2020), fine-tuned with the same optimiser and hyper-parameters as above.", "In order to also compare against non-transformer models, we report performance for a bi-LSTM-CRF (Lample et al., 2016) model with combined character-level and word-level representations.", "The model is comprised of 10 layers, with 300-dimensional word representations and 50-dimensional character representations, for a total of approximately 30 million trainable parameters.", "In our experiments, the model is trained with the Adam optimiser (Kingma and Ba, 2014) and a learning rate of 10^-4 for 100 epochs using a CRF loss (Lafferty et al., 2001).", "We first investigate how BERT learns general patterns from datasets that contain label noise.", "Figure 1 shows how the model performance on the CoNLL03 training and validation sets changes when faced with varying levels of noise, from 0% to 50%.", "Based on the progression of performance scores, we can divide BERT's learning process into roughly three distinct phases:", "1. Fitting: The model uses the training data to learn how to generalise, effectively learning simple patterns that can explain as much of the training data as possible (Arpit et al., 2017).", "Both the training and validation performance rapidly increase as the model learns these patterns.", "2.
Settling: The increase in performance plateaus and neither the validation nor the training performance change considerably.", "The duration of this phase seems to be inversely proportional to the amount of noise present in the dataset.", "3. Memorisation: The model rapidly starts to memorise the noisy examples, quickly improving the performance on training data while degrading the validation performance, effectively over-fitting to the noise in the dataset.", "A second phase of learning: We find BERT to exhibit a distinct second settling phase during which it does not over-fit.", "A resilience to label noise has been observed in other neural networks trained with gradient descent (Li et al., 2020).", "However, we find this phase to be much more prolonged in BERT compared to models pre-trained on other modalities, such as a pre-trained ResNet fine-tuned on CIFAR10, which immediately starts memorising noisy examples (see Appendix A for a comparison).", "These results indicate that the precise point of early stopping is not as important when it comes to fine-tuning pre-trained language models.", "Similar optimal performance is retained for a substantial period, therefore training for a fixed number of epochs can be sufficient.", "We illustrate BERT's behaviour by evaluating the token-level classification accuracy of noisy examples in Figure 2.", "
During the second phase, BERT completely ignores the noisy tokens and correctly misclassifies them (with respect to their corrupted labels), performing worse than a random classifier.", "The step-like improvements during the third stage show that the model is unable to learn any patterns from the noise and improves by repeatedly optimising on the same examples, gradually memorising them.", "Robustness to noise: We also observe in Figure 1 that BERT is extremely robust to noise and over-fitting in general.", "In the absence of noise, the model does not over-fit and maintains its development set performance, regardless of the length of training.", "Even with a large proportion of noise, model performance comparable to training on the clean dataset can be achieved by stopping the training process somewhere in the second phase.", "Adding 30% noise to the CoNLL03 dataset causes only a 0.9% decrease of validation performance in the second phase.", "We also hypothesise that, due to the robustness to noise shown in the second phase of training, a noise detector can be constructed based only on BERT's training losses, without requiring any other information.", "We find that a simple detector that clusters the losses using k-means reliably achieves over 90% noise-detection F1 score in all our experiments, further showing how the model is able to actively detect and reject single noisy examples (see Appendix E for details about the noise detection process).", "Impact of pre-training: The above properties can mostly be attributed to BERT's pre-training process: after large-scale optimisation as a language model, the network is primed for learning general patterns and better able to ignore individual noisy examples.", "We find that a randomly initialised model with the same architecture does not only achieve lower overall performance but crucially does not exhibit BERT's distinct second phase of learning.", "Other pre-trained transformers: We also analyse the behaviour of other pre-trained transformers for comparison.",
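The loss-clustering noise detector described above can be sketched as follows (a minimal illustration, not the authors' released code; the two-cluster split and the "higher-mean-loss cluster = noisy" rule are assumptions):

```python
def detect_noisy(losses, iters=50):
    """1-D 2-means clustering of per-example training losses.
    Returns a list of booleans: True = flagged as label noise."""
    lo, hi = min(losses), max(losses)          # initial centroids
    for _ in range(iters):
        clean = [x for x in losses if abs(x - lo) <= abs(x - hi)]
        noisy = [x for x in losses if abs(x - lo) > abs(x - hi)]
        if clean:
            lo = sum(clean) / len(clean)       # low-loss centroid
        if noisy:
            hi = sum(noisy) / len(noisy)       # high-loss centroid
    return [abs(x - lo) > abs(x - hi) for x in losses]

# in the settling phase, noisy-label examples keep a high loss
flags = detect_noisy([0.02, 0.05, 0.03, 2.1, 1.8, 0.04])
assert flags == [False, False, False, True, True, False]
```

The key observation is that during the second phase clean and noisy examples form well-separated loss clusters, so even this crude 1-D k-means suffices.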
"Specifically, studying RoBERTa and DeBERTa, we find the same training pattern that was observed in BERT: all models show a clear division into the three phases described above.", "These models are also all very robust to label noise during the settling phase of training.", "Notably, RoBERTa is even more resilient to label noise compared to the other two analysed models, despite DeBERTa outperforming it on public benchmarks (He et al., 2020).", "Training and validation performance visualisations, such as those in Figure 1, can be found for both models in Appendix I.", "Forgetting of learned information: Evaluating only the final model does not always provide the full picture regarding datapoint memorisation, as individual datapoints can be learned and forgotten multiple times during the training process.", "Following Toneva et al. (2019), we record a forgetting event for an example at epoch t if the model was able to classify it correctly at epoch t-1, but not at epoch t.", "Similarly, we identify a learning event for an example at epoch t if the model was not able to classify it correctly at epoch t-1, but it is able to do so at epoch t.", "A first learning event thus happens at the first epoch when a model is able to classify an example correctly.", "We furthermore refer to examples with zero and more than zero forgetting events as unforgettable and forgettable examples, respectively, while the set of learned examples includes all examples with one or more learning events.", "In Table 1, we show the number of forgettable, unforgettable, and learned examples on the training data of the CoNLL03 and JNLPBA datasets for BERT, a non-pre-trained BERT, and a bi-LSTM model.", "We also show the ratio between forgettable and learned examples, which indicates how easily a model forgets learned information.", "We can observe that BERT forgets less than other models and that pre-training is crucial for retaining important information.", "We show the most forgettable examples
in Appendix D, which tend to be atypical examples of the corresponding class.", "Toneva et al. (2019) found that the number of forgetting events remains comparable across different architectures for the vision modality, given a particular dataset.", "They report proportions of forgettable examples for MNIST, PermutedMNIST, CIFAR10, and CIFAR100 as 8.3%, 24.7%, 68.7%, and 92.38% respectively.", "Table 1: Number of forgettable, unforgettable, and learned examples during BERT training on the CoNLL03 dataset and JNLPBA dataset.", "[Table 1 (Forgettable N_f / Unforgettable N_u / Learned N_l / N_f/N_l) — CoNLL03: bi-LSTM 71.06% / 29.94% / 90.90% / 78.17%; non-pre-trained BERT 9.89% / 90.11% / 99.87% / 9.90%; pre-trained BERT 2.97% / 97.03% / 99.80% / 2.98%. JNLPBA: bi-LSTM 97.16% / 5.14% / 98.33% / 98.81%; non-pre-trained BERT 25.50% / 74.50% / 98.24% / 25.96%; pre-trained BERT 16.62% / 83.38% / 98.18% / 16.93%.]", "However, our experiments show that the same does not necessarily hold for pre-trained language models.", "Specifically, there is a large discrepancy in the ratio between forgettable and learned examples for BERT (~3%) and a bi-LSTM model (~80%) on the CoNLL03 dataset.", "We additionally analyse the distribution of first learning events throughout BERT's training on CoNLL03 with label noise between 0% and 50% (Figure 3) and notice how BERT learns the majority of learned examples during the first epochs of training.", "As the training progresses, we see that BERT stops learning new examples entirely, regardless of the level of noise, for the third and fourth epochs.", "Finally, in the last epochs BERT mostly memorises the noise in the data.", "We conducted additional experiments on other datasets (see Appendix F for results on the JNLPBA dataset).", "In all cases we observe the same distribution of first learning events throughout training.", "In the previous sections, we have observed that BERT learns examples and generalises very early in training.", "We will now examine if the same behaviour
applies in low-resource scenarios where a minority class is only observed very few times.", "To this end, we remove from the CoNLL03 training set all sentences containing tokens with the minority labels MISC and LOC, except for a predetermined number of such sentences.", "We repeat the process for the JNLPBA dataset with the DNA and Protein labels.", "We conduct similar experiments to the previous sections by studying how different numbers of sentences containing the target class affect BERT's ability to learn and generalise.", "We report in Figure 4 the training and validation classification F1 scores for the CoNLL03 datasets from which all but few (5 to 95) sentences containing the LOC label were removed.", "Note that the reported performance in this experiment refers to the LOC class only.", "In Figure 5 we also report the distribution of first learning events for the LOC class in the same setting.", "[Figure 5: First learning events distribution during the training on the CoNLL03 dataset with varying numbers of sentences containing the LOC class.]", "Two phenomena can be observed: 1) reducing the number of sentences greatly reduces the model's ability to generalise (validation performance decreases yet training performance remains comparable); and 2) when fewer sentences are available, they tend to be learned in earlier epochs for the first time.", "Corresponding experiments on the MISC label can be found in Appendix J.", "We also show the average entity-level F1 score on tokens belonging to the minority label and the model performance for the full NER task (i.e.
considering all classes) for the CoNLL03 and JNLPBA datasets in Figures 6 and 7 respectively.", "For the CoNLL03 dataset, we observe that BERT needs at least 25 examples of a minority label in order to be able to start learning it.", "Performance rapidly improves from there and plateaus at around 100 examples.", "For the JNLPBA dataset, the minimum number of examples increases to almost 50 and the plateau occurs at a higher number of examples.", "On the challenging WNUT17 dataset, BERT achieves only 44% entity-level F1.", "This low performance is attributable to the absence of entity overlap between the training set and the test set, which increases the inter-class variability of the examples.", "In order to address BERT's limitations in few-shot learning, we propose a new model, ProtoBERT, which combines BERT's pre-trained knowledge with the few-shot capabilities of prototypical networks (Snell et al., 2017) for sequence labelling problems.", "The method builds an embedding space where the inputs are clustered on a per-class basis, allowing us to classify a token by finding its closest centroid and assigning it the corresponding class.", "The model can be seen in Figure 8.", "We first define a support set S, which we use as context for the classification, and designate with S_k all elements of S that have label k.", "We refer to the set of points that we want to classify as the query set Q, with l(Q_i) indicating the label of the i-th element of Q.", "We will also refer to f as the function computed by BERT augmented with a linear layer, which produces an M-dimensional output.", "The model then classifies a given input x as follows: for each class k, we compute the centroid of the class in the learned feature space as the mean of all the elements that belong to class k in the
layer Figure 8: Schematic representation of the inference using a BERT model with a prototypical network layer.", "support set S : c k = 1 | S k | (cid:88) x i S k f ( x i ) (1) Then, we compute the distance from each input x Q to each centroid: dist k = d ( f ( x ) , c k ) and collect them in a vector v R k .", "Finally, we compute the probability of x belonging to class k as p ( y = k | x ) = exp ( d ( f ( x ) , c k )) (cid:80) k (cid:48) exp ( d ( f ( x ) , c k (cid:48) )) = = softmax ( v ) k The model is trained by optimising the cross-entropy loss between the above probability and the one-hot ground-truth label of x .", "Crucially, S and Q are not a fixed partition of the training set but change at each training step.", "Following Snell et al. (2017), we use Euclidean distance as a choice for the function d .", "In order to take into account the extreme underrepresentation of some classes, we create the support by sampling s 1 elements from each minority class and s 2 elements from each non-minority class.", "A high ratio s 1 /s 2 gives priority to the minority classes, while a low ratio puts more emphasis on the other classes.", "We then similarly construct the query set with a fixed ratio n between the minority classes and the non-minority classes.", "For NER, rather than learning a common representation for the negative class O , we only want the model to treat it as a fallback when no other similar class can be found.", "For this reason, we define the vector of distances v as follows: v = ( d O , dist 0 , . . . , dist k ) where d O is a scalar parameter of the network that is trained along with the other parameters.", "(i.e. 
class O ) when it is not close enough to any centroid, where d O represents the threshold for which we consider a point close enough.", "If no example of a certain class is available in the support set during the training, we assign a distance of 400 , making it effectively impossible to mistakenly classify the input as the missing class during that particular batch.", "Finally, we propose two ways to compute the class of a token at test time.", "The first method employs all examples from X to calculate the centroids needed at test time, which produces better results but is computationally expensive for larger datasets.", "The second method approximates the centroid c k using the moving average of the centroids produced at each training step: c ( t ) k c ( t ) k (1 ) c ( t 1) k where is a weighting factor.", "This method results in little overhead during training and only performs marginally worse than the first method.", "We first compare ProtoBERT to the standard pretrained BERT model with a classification layer on the CoNLL03 and JNLPBA datasets with a smaller number of sentences belonging to the minority classes.", "We show the results on the few-shot classes and for the full dataset for CoNLL03 in Figures 9 and 10 respectively.", "Similarly, we show the results for the few-shot class for JNLPBA in Figure 11.", "5 In all cases ProtoBERT consistently surpasses the performance of the baseline when training on few examples of the minority class.", "It particularly excels in the extreme few-shot setting, e.g. outperforming BERT by 40 F 1 points with 15 sentences containing the LOC class.", "As the number of available examples of the minority class increases, 5 A comparison on the full classification task can be found in Appendix H. 
", "BERT starts to match ProtoBERT's performance and outperforms it on the full dataset in some cases.", "While the main strength of ProtoBERT is on few-shot learning, we evaluate it also on the full CoNLL03, JNLPBA and WNUT17 datasets (without removing any sentences) in Table 2.", "In this setting, the proposed architecture achieves results mostly similar to the baseline while considerably outperforming it on the WNUT17 dataset of rare entities.", "The results in this section show that ProtoBERT, while designed for few-shot learning, performs at least on par with its base model in all tasks.", "This allows the proposed model to be applied to a much wider range of tasks and datasets without negatively affecting the performance if no label imbalance is present, while bringing a substantial improvement in few-shot scenarios.", "We conduct an ablation study to verify the effect of our improved centroid computation method.", "From the results in Table 2 we can affirm that, while a difference in performance does exist, it is quite modest (0.1-0.4%).", "On the other hand, this method reduces the training time and therefore energy consumption (Strubell et al., 2019) to one third of the original method on CoNLL03 and we expect the reduction to be even greater for larger datasets.", "In this study, we investigated the learning process during fine-tuning of pre-trained language models, focusing on generalisation and memorisation.", "By formulating experiments that allow for full control over the label distribution in the training data, we study the learning dynamics of the models in conditions of high label noise and low label frequency.", 
"The experiments show that BERT is capable of reaching near-optimal performance even when a large proportion of the training set labels has been corrupted.", "We find that this ability is due to the model's tendency to separate the training into three distinct phases: fitting, settling, and memorisation, which allows the model to ignore noisy examples in the earlier epochs.", "The pretrained models experience a prolonged settling phase when fine-tuned, during which their performance remains optimal, indicating that the precise area of early stopping is less crucial.", "Furthermore, we show that the number of available examples greatly affects the learning process, influencing both when the examples are memorised and the quality of the generalisation.", "Table 2: Comparison between the baseline model, the current state-of-the-art and the proposed architecture on the CoNLL03, JNLPBA and WNUT17 datasets evaluated using entity-level F1 score. State of the art: 93.50 (CoNLL03) / 77.59 (JNLPBA) / 50.03 (WNUT17); BERT + classification layer (baseline): 89.35 / 75.36 / 44.09; ProtoBERT: 89.87 / 73.91 / 48.62; ProtoBERT + running centroids: 89.46 / 73.54 / 48.56.", "We show that BERT fails to learn from examples in extreme few-shot settings, completely ignoring the minority class at test time.", "To overcome this limitation, we augment BERT with a prototypical network.", "This approach partially solves the model's limitations by enabling it to perform well in extremely low-resource scenarios and also achieves comparable performance in higher-resource settings.", "Michael is funded by the UKRI CDT in AI for Healthcare (Grant No. P/S023283/1)." ]
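The ProtoBERT inference described in the passage above — per-class centroids (Eq. 1), a distance vector with the fallback score d_O, a softmax over negative distances, and the running-centroid approximation — can be sketched as follows. This is a minimal NumPy illustration under simplifying assumptions, not the authors' implementation: `d_O` is a fixed scalar here rather than a trained parameter, squared Euclidean distance stands in for the function d, and all function names are hypothetical.

```python
import numpy as np

def centroids(support_emb, support_labels, num_classes):
    # Per-class mean of the support embeddings (Eq. 1).
    return np.stack([support_emb[support_labels == k].mean(axis=0)
                     for k in range(num_classes)])

def proto_probs(query_emb, cents, d_O=1.0):
    # Squared Euclidean distance from each query to each centroid.
    dists = ((query_emb[:, None, :] - cents[None, :, :]) ** 2).sum(axis=-1)
    # v = (d_O, dist_0, ..., dist_k); class probabilities are softmax(-v).
    v = np.concatenate([np.full((len(query_emb), 1), d_O), dists], axis=1)
    s = -v - (-v).max(axis=1, keepdims=True)   # stabilised softmax
    e = np.exp(s)
    return e / e.sum(axis=1, keepdims=True)

def update_running_centroid(c_prev, c_batch, alpha=0.9):
    # Moving-average centroid used at test time.
    return alpha * c_batch + (1 - alpha) * c_prev
```

Index 0 of the returned probabilities plays the role of the fallback class O: it wins only when a query is farther than d_O from every centroid.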
[ "abstain", "method", "result", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "objective", "method", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "objective", "method", "abstain", "result", "abstain", "other", "abstain", "result", "objective", "abstain", "other" ]
[ "Based on the recently proposed transferable dialogue state generator (TRADE) (Wu et al., 2019), which predicts dialogue states from utterance-concatenated dialogue context, we propose a multi-task learning model with a simple yet effective utterance tagging technique and a bidirectional language model as an auxiliary task for task-oriented dialogue state generation.", "By enabling the model to learn a better representation of the long dialogue context, our approaches attempt to solve the problem that the performance of the baseline significantly drops when the input dialogue context sequence is long.", "In our experiments, our proposed model achieves a 7.03% relative improvement over the baseline, establishing a new state-of-the-art joint goal accuracy of 52.04% on the MultiWOZ 2.0 dataset.", "Dialogue state tracking (DST, also known as belief tracking) predicts the user's goals in task-oriented dialogue systems, where dialogue states are normally represented in the form of a set of slot-value pairs.", "A variety of approaches to dialogue state tracking are devoted to dealing with two different settings: DST over a predefined domain ontology and DST with slot-value candidates from an open vocabulary.", "Most of the previous work is based on the first setting, assuming that all possible slot-value candidates are provided in a domain ontology in advance.", "The task of dialogue state tracking in this setting is therefore largely simplified to scoring all predefined slot-value pairs and selecting the value with the highest score for each slot as the final prediction.", "Although predefined ontology-based approaches are successfully used on datasets with small ontologies, such as DSTC2 (Henderson et al., 2014) and WOZ2.0 (Wen et al., 2017), they are quite limited in both scalability to scenarios with infinite slot values and prediction of unseen slot values.", "In order to address these issues of DST over predefined ontologies, recent efforts have 
been made to predict slot-value pairs in open vocabularies.", "Among them, TRADE (Wu et al., 2019) proposes to encode the entire dialogue context and to predict the value for each slot using a copy-augmented decoder, achieving state-of-the-art results on the MultiWOZ 2.0 dataset (Budzianowski et al., 2018).", "As TRADE simply concatenates all the system and user utterances in previous turns into a single sequence as the dialogue context for slot-value prediction, it is difficult for the model to identify whether an utterance in the dialogue context is from system or user when the concatenated sequence becomes long.", "We observe that the longest dialogue context after concatenation on the MultiWOZ 2.0 dataset contains 880 tokens.", "Our experiments also demonstrate that the longer the dialogue context sequence is, the worse TRADE performs.", "To deal with this problem, we propose two approaches to modeling long context for better dialogue state tracking.", "The first method is tagging.", "While constructing the dialogue context sequence, we insert a tag of [sys] symbol in front of each system utterance, and a tag of [usr] symbol in front of each user utterance.", "The purpose of adding such symbolic tags in the concatenated dialogue context sequence is to explicitly enhance the capability of the model in distinguishing system and user utterances.", "In the second method, we propose to integrate a bi-directional language modeling module into the upstream of the model as an auxiliary task to gain better understanding and representation of the dialogue context.", "The bi-directional language modeling task is to predict the next word by using forward hidden states and the previous word by using backward hidden states based on the dialogue context sequence without any annotation.", "With these two approaches, we perform dialogue state tracking in a multi-task learning architecture.", "In summary, the contributions of our work are as follows: We propose a simple tagging 
method to explicitly separate system from user utterances in the concatenated dialogue context.", "We propose a language modeling task as an auxiliary task to better model long context for DST.", "We conduct experiments on the MultiWOZ 2.0 dataset.", "Both methods achieve significant improvements over the baselines in all evaluation metrics.", "The combination of the two methods establishes new state-of-the-art results on MultiWOZ 2.0.", "In addition, we provide a detailed analysis on the improvements achieved by our methods.", "Predefined ontology-based DST assumes that all slot-value pairs are provided in an ontology.", "Mrksic et al. (2017) propose a neural belief tracker (NBT) to leverage semantic information from word embeddings by using distributional representation learning for DST.", "An extension to the NBT is then proposed by Mrksic and Vulic (2018), which learns to update belief states automatically.", "Zhong et al. (2018) use slot-specific local modules to learn slot features and propose a global-locally self-attentive dialogue state tracker (GLAD).", "Nouri and Hosseini-Asl (2018) propose the GCE model based on GLAD, using only one recurrent network with global conditioning.", "Ramadan et al. (2018) introduce an approach that fully utilizes semantic similarity between dialogue utterances and the ontology terms.", "Ren et al. (2018) propose StateNet, which generates a fixed-length representation of the dialogue context and compares the distances between this representation and the value vectors in the candidate set for making predictions.", "These predefined ontology-based DST approaches suffer from their weak scalability to large ontologies and cannot deal with previously unseen slot values.", "In open vocabulary-based DST, Xu and Hu (2018) propose a model that learns to predict unknown values by using an index-based pointer network for different slots.", "Wu et al. 
(2019) apply an encoder-decoder architecture to generate dialogue states with the copy mechanism.", "However, their method simply concatenates the whole dialogue context as input and does not perform well when the dialogue context is long.", "We study this problem and propose methods to help the DST model better model long context.", "Inspired by Zhou et al. (2019) who use an additional language model in question generation, we attempt to incorporate language modeling into dialogue state tracking as an auxiliary task.", "In this section, we describe our proposed methods.", "First, section 3.1 briefly introduces the recent TRADE model (Wu et al., 2019) as background knowledge, followed by our methods: utterance tagging in section 3.2 and multi-task learning with language modeling in section 3.3.", "TRADE is an encoder-decoder model that encodes concatenated previous system and user utterances as dialogue context and generates slot value word by word for each slot exploring the copy mechanism (Wu et al., 2019).", "The architecture of TRADE is shown in Figure 1 without the language model module.", "In the encoder of TRADE, system and user utterances in previous dialogue turns are simply concatenated without any labeling.", "In our experiments, we find that the performance of the TRADE model significantly drops when the length of the dialogue context is long.", "On the MultiWOZ 2.0 dataset, the maximum length of a dialogue context is up to 880 tokens.", "About 27% of instances on the test set have dialogue context sequences longer than 200 tokens.", "The joint accuracy of the TRADE on these cases drops to lower than 22%.", "This suggests that TRADE suffers from long context.", "To deal with this problem, we first propose a simple method to label system and user utterances by inserting a tag of [sys] just at the beginning of each system utterance and a tag of [usr] in front of each user utterance when they are concatenated into the dialogue context.", "We conjecture 
that mixing system and user utterances in one single sequence may confuse the encoder.", "It may also mislead the decoder to attend to inappropriate parts and the copy network to copy from wrong utterances.", "The explicit indicators from the two tags are to help TRADE distinguish system from user utterances (e.g. [sys] Hello [usr] I want cheap hotels. ).", "We further propose to incorporate a bi-directional language modeling module into the dialogue state tracking model in a multi-task learning framework for DST, which is shown in Figure 1.", "The bi-directional language modeling module is to predict the next word and the previous word in the concatenated sequence with the forward and the backward GRU network respectively.", "We first feed the concatenated dialogue context into the embedding layer.", "We initialize each word embedding in the dialogue context by concatenating the GloVe embedding (Pennington et al., 2014) and the character embedding (Hashimoto et al., 2017).", "This word embedding sequence is then fed into a bi-directional GRU network to get the hidden representations $\overrightarrow{h}^{lm}_t$ and $\overleftarrow{h}^{lm}_t$ in two directions, which are used to predict the next and the previous word through a softmax layer as follows: $P_{lm}(w_{t+1} \mid w_{<t+1}) = \mathrm{softmax}(W_f \overrightarrow{h}^{lm}_t)$ (1) $P_{lm}(w_{t-1} \mid w_{>t-1}) = \mathrm{softmax}(W_b \overleftarrow{h}^{lm}_t)$ (2) The loss function is defined as the sum of the negative log-likelihoods of the next and previous words in the sequence.", "The language modeling loss $L_{lm}$ is therefore calculated as follows ( T is the length of the concatenated dialogue context sequence): $L_{lm} = -\sum_{t=1}^{T-1} \log P_{lm}(w_{t+1} \mid w_{<t+1}) - \sum_{t=2}^{T} \log P_{lm}(w_{t-1} \mid w_{>t-1})$ (3) The sum of the forward and backward hidden states in the language model module is used as the hidden representation $h^{lm}_t$ for word $w_t$ in the dialogue context: $h^{lm}_t = \overrightarrow{h}^{lm}_t + \overleftarrow{h}^{lm}_t$ .", "We further sum it with the word embedding of $w_t$ and feed the sum into the utterance encoder.", "Following Wu et al. 
(2019), we include the slot gate and state generator modules in our model and calculate the dialogue state tracking loss $L_{dst}$ .", "The training objective for the multi-task learning framework is to minimize the total loss $L_{total}$ , which is the sum of the DST and language modeling losses: $L_{total} = L_{dst} + \lambda L_{lm}$ (4) where $\lambda$ is a hyper-parameter which is used to balance the two tasks.", "In this section, we evaluate our proposed methods on a public dataset.", "We conducted experiments on MultiWOZ 2.0 (Budzianowski et al., 2018), which is the largest multi-domain task-oriented dialogue dataset, consisting of over 10,000 dialogues from seven domains.", "Each dialogue is composed of 13.68 turns on average.", "Following Wu et al. (2019), we used five domains, excluding the hospital and police domains, which account for a small portion and do not appear on the test set.", "In our multi-task learning model, both the sizes of hidden states and word embeddings were set to 400.", "We set the batch size to 8 and applied the delay update mechanism with different step sizes to train the model.", "Joint accuracy and slot accuracy are the two metrics we used to evaluate the performance on dialogue state tracking.", "Table 1 shows the results of our methods and other baselines on the test set of the MultiWOZ 2.0 dataset.", "Table 2: Results and statistics on different lengths of dialogue context on the test set. Correct turns and joint accuracy (TRADE vs. ours): length 0-99, 2,940 turns, 2,115 vs. 2,190 (+75), 71.94% vs. 74.49% (+2.55); length 100-199, 2,466 turns, 1,028 vs. 1,129 (+101), 41.69% vs. 45.78% (+4.09); length 200-299, 1,494 turns, 356 vs. 445 (+89), 23.83% vs. 29.79% (+5.96); length >= 300, 468 turns, 57 vs. 70 (+13), 12.18% vs. 14.96% (+2.78).", "Table 3: Statistics and analysis on different types of prediction errors (total / correct / over pred. / partial pred. / false pred.): TRADE 7,368 / 3,556 / 791 / 1,480 / 1,541; ours 7,368 / 3,834 (+278) / 877 (+86) / 1,201 (-279) / 1,456 (-85).", "Our full model (tagging + language modeling) significantly 
outperforms several previous state-of-the-art models, including TRADE, and achieves new state-of-the-art results: 52.04% joint accuracy and 97.26% slot accuracy on MultiWOZ 2.0.", "The tagging alone (-LM) improves the joint accuracy on MultiWOZ 2.0 by 1.53%, while the auxiliary language modeling alone (-Tagging) improves it by 2.74%.", "Figure 2 shows the impact of $\lambda$ and the number of delay update steps on DST.", "Consequently, our model performs best when we set $\lambda$ to 0.9 and the number of delay update steps to 4.", "We further provide a deep analysis of our results on MultiWOZ 2.0 according to the length of the concatenated dialogue context, which is shown in Table 2.", "We can clearly observe that the performance of the baseline model drops sharply with the increase of the dialogue context length.", "We can also find that our model performs better than the baseline in all cases, suggesting that the proposed methods are able to improve modeling long dialogue context for DST.", "Table 3 shows the statistics of different kinds of prediction errors on the test set of MultiWOZ 2.0.", "We define three types of dialogue state prediction errors.", "Over prediction means that the predicted states not only fully cover the golden states but also include some redundant slot values.", "Partial prediction means that the predicted states are just part of the golden states, with some slot values missing.", "False prediction means that false slot values are predicted for some slots.", "As shown in Table 3, our model significantly reduces the number of partial and false prediction errors, with the help of a better representation of the dialogue context.", "In this paper, we have presented utterance tagging and auxiliary bi-directional language modeling in a multi-task learning framework to model long dialogue context for open vocabulary-based DST.", "Experiments on the MultiWOZ 2.0 dataset show that our model significantly outperforms the baselines and achieves new 
state-of-the-art results.", "The present research was supported by the National Natural Science Foundation of China (Grant No. 61861130364) and the Royal Society (London) (NAF \\ R1 \\ 180122).", "We would like to thank the anonymous reviewers for their insightful comments." ]
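The two methods described in the passage above — [sys]/[usr] utterance tagging and the auxiliary bidirectional language modeling loss of Eq. (3), combined with the weighted multi-task objective of Eq. (4) — can be sketched as follows. A minimal NumPy illustration with hypothetical helper names, not the paper's code: the next-/previous-word distributions are passed in precomputed rather than produced by actual GRUs, and the assumption that the balancing hyper-parameter multiplies the LM term follows Eq. (4) as reconstructed.

```python
import numpy as np

def build_context(turns):
    # Tagging method: prefix each utterance with its speaker tag before
    # concatenating the dialogue history into one sequence.
    tags = {"system": "[sys]", "user": "[usr]"}
    return " ".join(f"{tags[speaker]} {utt}" for speaker, utt in turns)

def bilm_loss(fwd_probs, bwd_probs, tokens):
    # Eq. (3): NLL of the next word (forward direction, t = 1..T-1)
    # plus NLL of the previous word (backward direction, t = 2..T).
    # fwd_probs[t][w] ~ P_lm(w_{t+1} = w | w_{<t+1});
    # bwd_probs[t][w] ~ P_lm(w_{t-1} = w | w_{>t-1}).
    T = len(tokens)
    fwd = -sum(np.log(fwd_probs[t][tokens[t + 1]]) for t in range(T - 1))
    bwd = -sum(np.log(bwd_probs[t][tokens[t - 1]]) for t in range(1, T))
    return fwd + bwd

def total_loss(dst_loss, lm_loss, lam=0.9):
    # Multi-task objective: L_total = L_dst + lambda * L_lm (Eq. 4).
    return dst_loss + lam * lm_loss
```

With uniform predictions over a vocabulary of size V, `bilm_loss` reduces to 2(T-1)·log V, which is a quick sanity check on the index ranges.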
[ "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "objective", "abstain", "method", "abstain", "objective", "abstain", "method", "objective", "objective", "method", "objective", "objective", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "other", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "objective", "other", "other" ]
[ "User language data can contain highly sensitive personal content.", "As such, it is imperative to offer users a strong and interpretable privacy guarantee when learning from their data.", "In this work, we propose SentDP: pure local differential privacy at the sentence level for a single user document.", "We propose a novel technique, DeepCandidate, that combines concepts from robust statistics and language modeling to produce high-dimensional, general-purpose ε-SentDP document embeddings.", "This guarantees that any single sentence in a document can be substituted with any other sentence while keeping the embedding ε-indistinguishable.", "Our experiments indicate that these private document embeddings are useful for downstream tasks like sentiment analysis and topic classification and even outperform baseline methods with weaker guarantees like word-level Metric DP.", "Language models have now become ubiquitous in NLP (Devlin et al., 2019; Liu et al., 2019b; Alsentzer et al., 2019), pushing the state of the art in a variety of tasks (Strubell et al., 2018; Liu et al., 2019a; Mrini et al., 2021).", "While language models capture meaning and various linguistic properties of text (Jawahar et al., 2019; Yenicelik et al., 2020), an individual's written text can include highly sensitive information.", "Even if such details are not needed or used, sensitive information has been found to be vulnerable and detectable to attacks (Pan et al., 2020; Abdalla et al., 2020; Carlini et al., 2020).", "Reconstruction attacks (Xie and Hong, 2021) have even successfully broken through private learning schemes that rely on encryption-type methods (Huang et al., 2020).", "As of now, there is no broad agreement on what constitutes good privacy for natural language (Kairouz et al., 2019).", "Huang et al. 
(2020) argue that different applications and models require different privacy definitions.", "Figure 1: x and x′ yield z ∈ R^d with similar probability.", "Several emerging works propose to apply Metric Differential Privacy (Alvim et al., 2018) at the word level (Feyisetan et al., 2019; Feyisetan and Kasiviswanathan, 2021; Carvalho et al., 2021; Qu et al., 2021; Yue et al., 2021; Xu et al., 2021).", "They propose to add noise to word embeddings, such that they are indistinguishable from their nearest neighbours.", "At the document level, however, the above definition has two areas for improvement.", "First, it may not offer the level of privacy desired.", "Having each word indistinguishable with similar words may not hide higher level concepts in the document, and may not be satisfactory for many users.", "Second, it may not be very interpretable or easy to communicate to end-users, since the privacy definition relies fundamentally on the choice of embedding model to determine which words are indistinguishable with a given word.", "This may not be clear and precise enough for end-users to grasp.", "In this work, we propose a new privacy definition for documents: sentence privacy.", "This guarantee is both strong and interpretable: any sentence in a document must be indistinguishable with any other sentence.", "A document embedding is sentence-private if we can replace any single sentence in the document and have a similar probability of producing the same embedding.", "As such, the embedding only stores limited information unique to any given sentence.", "This definition is easy to communicate and strictly stronger than word-level definitions, as modifying a sentence can mean changing just one word.", "Although this definition is strong, we are able to produce unsupervised, general embeddings of documents that are useful for downstream tasks like sentiment analysis and topic classification.", "To achieve this, we propose a novel privacy mechanism, DeepCandidate, which 
privately samples a high-dimensional embedding from a preselected set of candidate embeddings derived from public, non-private data.", "DeepCandidate works by first pre-tuning a sentence encoder on public data such that semantically different document embeddings are far apart from each other.", "Then, we approximate each candidate's Tukey Depth within the private documents' sentence embeddings.", "Deeper candidates are the most likely to be sampled to represent the private document.", "We evaluate DeepCandidate on three illustrative datasets, and show that these unsupervised private embeddings are useful for both sentiment analysis and topic classification as compared to baselines.", "In summary, this work makes the following contributions to the language privacy literature: 1. A new, strong, and interpretable privacy definition that offers complete indistinguishability to each sentence in a document.", "2. A novel, unsupervised embedding technique, DeepCandidate, to generate sentence-private document embeddings.", "3. An empirical assessment of DeepCandidate, demonstrating its advantage over baselines, delivering strong privacy and utility.", "Then, the space of all documents is X = S* and document x ∈ X is written as x = ( s 1 , s 2 , . . . 
, s k ) for any non-negative number k of sentences.", "In this work, we focus on cohesive documents of sentences written together like reviews or emails, but our methods and guarantees apply to any sequence of sentences, such as a collection of messages written by an individual over some period of time.", "Our task is to produce an embedding z ∈ R^d of any document x ∈ X such that any single sentence s_i ∈ x is indistinguishable with every other sentence s′_i ∈ S \ s_i .", "That is, if one were to replace any single sentence in the document s_i ∈ x with any other sentence s′_i ∈ S \ s_i , the probability of producing a given embedding z is similar.", "To achieve this, we propose a randomized embedding function (the embedding mechanism) M : X → R^d that generates a private embedding z = M ( x ) that is useful for downstream tasks.", "The above privacy notion is inspired by Differential Privacy (DP) (Dwork, 2006).", "It guarantees that whether an individual participates (dataset D ) or not (dataset D′ ), the probability of any output only changes by a constant factor.", "Definition 2.1 (Differential Privacy).", "Given any pair of datasets D, D′ ∈ D that differ only in the information of a single individual, we say that the mechanism A : D → O satisfies ε-DP if Pr[ A ( D ) ∈ O ] ≤ e^ε Pr[ A ( D′ ) ∈ O ] for any event O ⊆ O .", "Note that we take probability over the randomness of the mechanism A only, not the data distribution.", "DP has several nice properties that make it easy to work with, including closure under post-processing, an additive privacy budget (composition), and closure under group privacy guarantees (guarantees to a subset of multiple participants).", "See Dwork et al. 2014 for more details.", "When our output space is a discrete and finite set of alternatives to choose from, O = ( o 1 , o 2 , . . . 
, o n ) , we may use the exponential mechanism to satisfy ε-DP (McSherry and Talwar, 2007).", "To do so, we specify a utility function over input/output pairs, u : D × O → R .", "The utility of choosing alternative o ∈ O when the input is dataset D ∈ D is then given by u ( D, o ) .", "The sensitivity of u ( · , · ) is the worst-case change in utility over pairs of neighboring datasets, $\Delta u = \max_{D, D', o} | u(D, o) - u(D', o) |$ .", "Natural Language Privacy.", "Previous work has demonstrated that NLP models and embeddings are vulnerable to reconstruction attacks (Carlini et al., 2020; Abdalla et al., 2020; Pan et al., 2020).", "In response there have been various efforts to design privacy-preserving techniques and definitions across NLP tasks.", "A line of work focuses on how to make NLP model training satisfy DP (Kerrigan et al., 2020; Bagdasaryan et al., 2019).", "This is distinct from our work in that it satisfies central DP, where data is first aggregated non-privately and then privacy-preserving algorithms (i.e. 
training) are run on that data.", "We model this work of the local version of DP (Dwork et al., 2006), wherein each individual's data is made private before centralizing.", "Our definition guarantees privacy to a single document as opposed to a single individual.", "A line of work more comparable to our approach makes documents locally private by generating a randomized version of a document that satisfies some formal privacy definition.", "As with the private embedding of our work, this generates locally private representation of a given document x .", "The overwhelming majority of these methods satisfy an instance of Metric-DP (Alvim et al., 2018) at the word level (Feyisetan et al., 2019; Feyisetan and Kasiviswanathan, 2021; Carvalho et al., 2021; Qu et al., 2021; Yue et al., 2021; Xu et al., 2021).", "As discussed in the introduction, this guarantees that a document x is indistinguishable with any other document x produced by swapping a single word in x with a similar word.", "Two words are similar' if they are close in the word embeddings space (e.g. GloVe).", "This guarantee is strictly weaker than our proposed definition, SentDP, which offers indistinguishability to any two documents that differ in an entire sentence.", "Privacy-preserving embeddings.", "There is a large body of work on non-NLP privacy-preserving embeddings, as these embeddings have been shown to be vulnerable to attacks (Song and Raghunathan, 2020).", "Li and Clifton (2021) attempt to generate locally private embeddings by bounding the embedding space, and we compare with this method in our experiments.", "Kamath et al. (2019) propose a method for privately publishing the average of embeddings, but their algorithm is not suited to operate on the small number of samples (sentences) a given document offers.", "Finally, Beimel et al. 
(2019) propose a method for privately learning halfspaces in R^d, which is relevant to private Tukey medians, but their method would restrict input examples (sentence embeddings) to a finite discrete set in R^d, a restriction we cannot tolerate.", "We now introduce our simple, strong privacy definition, along with concepts we use to satisfy it.", "In this work, we adopt the local notion of DP (Dwork et al., 2006), wherein each individual's data is guaranteed privacy locally before being reported and centralized.", "Our mechanism M receives a single document from a single individual, x ∈ X.", "We require that M provides indistinguishability between documents x, x′ differing in one sentence.", "Definition 3.1 (Sentence Privacy, SentDP).", "Given any pair of documents x, x′ ∈ X that differ only in one sentence, we say that a mechanism M : X → O satisfies ε-SentDP if Pr[M(x) ∈ O] ≤ e^ε Pr[M(x′) ∈ O] for any event O ⊆ O.", "We focus on producing an embedding of the given document x, thus the output space is O = R^d.", "For instance, consider the neighboring documents x = (s_1, s_2, . . . , s_k) and x′ = (s_1, s_2′, . . . , s_k) that differ in the second sentence, i.e. s_2, s_2′ can be any pair of sentences in S.", "This is a strong notion of privacy in comparison to existing definitions across NLP tasks.", "However, we show that we can guarantee SentDP while still providing embeddings that are useful for downstream tasks like sentiment analysis and classification.", "In theory, a SentDP private embedding z should be able to encode any information from the document that is not unique to a small subset of sentences.", "For instance, z can reliably encode the sentiment of x as long as multiple sentences reflect the sentiment.", "By the group privacy property of DP, which SentDP maintains, two documents differing in a sentences are aε-indistinguishable.", "So, the more sentences that reflect the sentiment, the more M can encode this into z without compromising on privacy.", "Our approach is to produce a private version of the average of general-purpose sentence embeddings.", "By the post-processing property of DP, this embedding can be used repeatedly in any fashion desired without degrading the privacy guarantee.", "Our method makes use of existing pre-trained sentence encoding models.", "We denote this general sentence encoder as G : S → R^d.", "We show in our experiments that the mean of sentence embeddings, g(x) = (1/k) Σ_{s_i ∈ x} G(s_i), (1) maintains significant information unique to the document and is useful for downstream tasks like classification and sentiment analysis.", "We call g(x) the document embedding since it summarizes the information in document x.", "While there exist other definitions of document embeddings (Yang et al., 2016; Thongtan and Phienthrakul, 2019; Bianchi et al., 2020), we decide to use averaging as it is a simple and established embedding technique (Bojanowski et al., 2017; Gupta et al., 2019; Li et al., 2020).", "Definition 3.2.", "Given a distribution P over R^d, the Tukey depth of a point y ∈ R^d is TD_P(y) = inf_{w ∈ R^d} P{y′ : w⊤(y′ − y) ≥ 0}.", "In other words, take the hyperplane orthogonal to vector w, h_w, that passes through point y.", "Let P_w^1 be the probability under P that a point lands on one side of h_w and let P_w^2 be the probability that a point lands on the other side, so P_w^1 + P_w^2 = 1.", "y is considered deep if min(P_w^1, P_w^2) is close to a half for all vectors w (and thus all h_w passing through y).", "The Tukey median of distribution P, TMED(P), is the set of all points with maximal Tukey depth, TMED(P) = argmax_{y ∈ R^d} TD_P(y).", "For a finite sample Y of n points, the empirical depth counts sample points on either side of the hyperplane, and the median, TMED(Y), maximizes the depth, which is at most half the size of the sample, ⌊n/2⌋.", "Generally, finding a point in TMED(Y) is hard; state-of-the-art algorithms have exponential dependence on the dimension (Chan, 2004), which is a non-starter when working with high-dimensional embeddings.", "However, there are efficient approximations which we will take advantage of.", "While useful and general, the document embedding g(x) does not satisfy SentDP.", "We now turn to describing our privacy-preserving technique, DeepCandidate, which generates general, ε-SentDP document embeddings that preserve relevant information in g(x), and are useful for downstream tasks.", "To understand the nontrivial nature of this problem, we first analyze why the simplest, straightforward approaches are insufficient.", "Motivation.", "Preserving privacy for high-dimensional objects is known to be challenging (Kamath et al., 2019; Feyisetan and Kasiviswanathan, 2021; Zhou et al., 2009).", "For instance, adding Laplace noise directly to g(x), as done to satisfy some privacy definitions (Feyisetan et al., 2019; Alvim et al., 2018), does not guarantee SentDP for any ε.", "Recall that the embedding space is all of R^d.", "A change in one sentence can lead to an unbounded change in g(x), since we do not put any restrictions on the general encoder G.", "Thus, no matter how much noise we add to g(x) we cannot satisfy SentDP.", "A natural fix is instead to restrict each sentence embedding to a limited set such as a sphere or
hypercube as done in prior work (Li and Clifton, 2021; Abadi et al., 2016).", "In doing so, we bound how far apart the embeddings of any two sentences can be, ∥G(s_i) − G(s_i′)∥₁, thus allowing us to satisfy SentDP by adding finite-variance noise.", "However, such schemes offer poor utility due to the high-dimensional nature of useful document embeddings (we confirm this in our experiments).", "We must add noise with standard deviation proportional to the dimension of the embedding, thus requiring an untenable degree of noise for complex encoders like BERT, which embed into R^768.", "Our method has three pillars: (1) sampling from a candidate set of public, non-private document embeddings to represent the private document, (2) using the Tukey median to approximate the document embedding, and (3) pre-training the sentence encoder, G, to produce relevant candidates with high Tukey depth for private document x.", "Instead of having our mechanism select a private embedding z from the entire space of R^d, we focus the mechanism to select from a set of m candidate embeddings, F, generated by m public, non-private documents.", "We assume the document x is drawn from some distribution μ over documents X.", "For example, if we know x is a restaurant review, μ may be the distribution over all restaurant reviews.", "F is then a collection of document embeddings over m publicly accessible documents x_i, F = {f_i = g(x_i) : x_1, . . . , x_m ∼ iid μ}, and we denote the corresponding distribution over f_i as g(μ).", "By selecting documents F to be similar in nature to the private document x, we inject an advantageous inductive bias into our mechanism, which is critical to satisfy strong privacy while preserving meaningful information relevant to x.", "We now propose a novel mechanism MTD, which approximates g(x) by sampling a candidate embedding from F.", "MTD works by concentrating probability on candidates with high Tukey depth w.r.t. the set of sentence embeddings S_x = {G(s_i) : s_i ∈ x}.", "We model the sentences s_i of document x as i.i.d. draws from a document-specific distribution, so S_x is k draws from the induced distribution of sentence embeddings of x under G.", "Deep points are a good approximation of the mean under light assumptions.", "If this distribution belongs to the set of halfspace-symmetric distributions (including all elliptic distributions, e.g. Gaussians), we know that its mean lies in the Tukey median (Zhu et al., 2020).", "Formally, MTD is an instance of the exponential mechanism (Definition 2.2), and is defined by its utility function.", "We set the utility of a candidate document embedding f_i ∈ F to be an approximation of its depth w.r.t. sentence embeddings S_x, u(x, f_i) = TD̂_{S_x}(f_i).", "The approximation TD̂_{S_x}, which we detail in the Appendix, is necessary for computational efficiency.", "If the utility of f_i is high, we call it a 'deep candidate' for sentence embeddings S_x.", "The more candidates sampled (higher m), the higher the probability that at least one has high depth.", "Without privacy, we could report the deepest candidate, z* = argmax_{f_i ∈ F} TD̂_{S_x}(f_i).", "However, when preserving privacy with MTD, increasing m has diminishing returns.", "To see this, fix a set of sentence embeddings S_x for document x and the i.i.d. distribution over candidate embeddings f_i ∼ g(μ).", "This induces a multinomial distribution over depth, u_j(x) = Pr[u(x, f_i) = j], with Σ_{j=0}^{⌊k/2⌋} u_j(x) = 1, where randomness is taken over draws of f_i.", "For candidate set F and sentence embeddings S_x, the probability of MTD's selected candidate, z, having (approximated) depth j is given by Pr[u(x, z) = j] = a_j(x) e^{εj/2} / Σ_{j′=0}^{⌊k/2⌋} a_{j′}(x) e^{εj′/2}, (4) where a_j(x) is the fraction of candidates in F with depth j w.r.t. the sentence embeddings of document x, S_x.", "For m sufficiently large, a_j(x) concentrates around u_j(x), so further increasing m does not increase the probability of MTD sampling a deep candidate.", "For numerical intuition, suppose m = 5000 (as in our experiments): if b candidates have depth j, and all other candidates have depth 0, MTD will sample one of these deep candidates w.p. 0.95 under the settings in Table 1.", "Table 1 (Conditions for deep candidates; columns ε | b | j): 3 | 55 | 5; 6 | 25 | 3; 10 | 5 | 2; 23 | 1 | 1.", "Figure 3: G′ is trained to encourage similar documents to embed close together and different documents to embed far apart.", "For low ε < 10 (high privacy), about 1% of candidates need to have high depth (≥ 3) in order to be reliably sampled.", "Note that this is only possible for documents with ≥ 6 sentences.", "For higher ε ≥ 10, MTD will reliably sample low-depth candidates even if there are only a few.", "From these remarks we draw two insights on how DeepCandidate can achieve high utility.", "(1) More sentences: a higher k enables greater depth, and thus a higher probability of sampling deep candidates with privacy.", "We explore this effect in our experiments.", "(2) Tuned encoder: by tuning the sentence encoder G for a given domain, we can modify the distribution over document embeddings g(μ) and sentence embeddings to encourage deep candidates (high probability u_j for deep j) that are relevant to document x.", "So far, we have identified that deep candidates from F can approximate g(x).", "To produce a good approximation, we need to ensure 1) that there reliably exist deep candidates for any given set of sentence embeddings S_x, and 2) that these deep candidates are good representatives of document x.", "The general sentence encoder G used may not satisfy this 'out of the box'.", "If the distribution on document embeddings g(μ) is very scattered around the instance space R^768, it can be exceedingly unlikely to have a deep candidate f_i among
sentence embeddings S_x.", "On the other hand, if the distribution g(μ) is tightly concentrated in one region (e.g. 'before training' in Figure 3), then we may reliably have many deep candidates, but several will be poor representatives of the document embedding g(x).", "To prevent this, we propose an unsupervised, efficient, and intuitive modification to the (pretrained) sentence encoder G.", "We freeze the weights of G and add additional perceptron layers mapping into the same embedding space, H : R^d → R^d, producing the extended encoder G′ = H ∘ G.", "Broadly, we train H to place similar document embeddings close together, and different embeddings far apart.", "To do so, we leverage the assumption that a given domain's distribution over document embeddings g(μ) can be parameterized by n_c clusters, visualized as the black circles in Figure 3.", "H's aim is to recode sentence embeddings such that document embedding clusters are preserved, but spaced apart from each other.", "By preserving clusters, we are more likely to have deep candidates (increased probability u_j for high depth j).", "By spacing clusters apart, these deep candidates are more likely to come from the same or a nearby cluster as document x, and thus be good representatives.", "Note that H is domain-specific: we train separate H encoders for each dataset.", "The final component of DeepCandidate is computing the approximate depth of a candidate for use as utility in the exponential mechanism as in Eq. (3).", "We use a version of the approximation algorithm proposed in Gilad-Bachrach and Burges 2012.", "Intuitively, our algorithm computes the one-dimensional depth of each f_i among x's sentence embeddings S_x on each of p random projections.", "The approximate depth of f_i is then its lowest depth across the p projections.", "We are guaranteed that TD̂_{S_x}(f_i) ≥ TD_{S_x}(f_i).", "Due to space constraints, we leave the detailed description of the algorithm for the Appendix.", "The privacy proof follows from the fact that TD̂_{S_x}(f_i) has bounded sensitivity (changing one sentence can only change the depth of f_i by one).", "We expand on this, too, in the Appendix.", "We produce private, general embeddings of documents from three English-language datasets:", "Good Reads (Wan and McAuley, 2018): 60k book reviews from four categories: fantasy, history, romance, and children's literature.", "Train-48k | Val-8k | Test-4k.", "20 News Groups (Lang, 1995): 11239 correspondences from 20 different affinity groups.", "Due to similarity between several groups (e.g. comp.os.ms-windows.misc and comp.sys.ibm.pc.hardware), the dataset is partitioned into nine categories.", "Train-6743 | Val-2247 | Test-2249.", "IMDB (Maas et al., 2011): 29k movie reviews from the IMDB database, each labeled as a positive or negative review.", "Train-23k | Val-2k | Test-4k.", "To evaluate the utility of these unsupervised, private embeddings, we check if they are predictive of document properties.", "For the Good Reads and 20 News Groups datasets, we evaluate how useful the embeddings are for topic classification.", "For IMDB we evaluate how useful the embeddings are for sentiment analysis (positive or negative review).", "Our metric for performance is test-set macro F1 score.", "For the general encoder, G : S → R^768, we use SBERT (Reimers and Gurevych, 2019), a version of BERT fine-tuned for sentence encoding.", "Sentence embeddings are generated by mean-pooling output tokens.", "In all tasks, we freeze the weights of SBERT.", "The cluster-preserving recoder, H, as well as every classifier, is implemented as an instance of a 4-layer MLP taking 768-dimensional inputs and differing only in output dimension.", "We denote an instance of this MLP with output dimension o as MLP_o.", "We run 5 trials of each experiment with randomness taken over the privacy mechanisms, and plot the mean along with a ±1 standard deviation envelope.", "DeepCandidate: The candidate set F consists of 5k document
embeddings from the training set, each containing at least 8 sentences.", "To train G′, we find n_c = 50 clusters with k-means.", "We train a classifier C_dc = MLP_r on document embeddings g(x) to predict class, where r is the number of classes (topics or sentiments).", "We compare the performance of DeepCandidate with 4 baselines: Non-private, Truncation, Word-level Metric-DP, and Random Guesser.", "Non-private: This demonstrates the usefulness of non-private sentence-mean document embeddings g(x).", "We generate g(x) for every document using SBERT, and then train a classifier C_nonpriv = MLP_r to predict x's label from g(x).", "Truncation: We adopt the method from Li and Clifton 2021 to truncate (clip) sentence embeddings within a box in R^768, thereby bounding sensitivity as described at the beginning of Section 4. Laplace noise is then added to each dimension.", "Documents with more sentences have proportionally less noise added due to the averaging operation reducing sensitivity.", "Word Metric-DP (MDP): The method from Feyisetan et al. 2019 satisfies ε-word-level metric DP by randomizing words.", "We implement MDP to produce a randomized document x′, compute g(x′) with SBERT, and predict class using C_nonpriv.", "Random Guess: To set a bottom line, we show the theoretical performance of a random guesser only knowing the distribution of labels.", "This is addressed in Figures 4a to 4c.", "Here, we observe how the test-set macro F1 score changes with the privacy parameter ε (a lower ε offers stronger privacy).", "Generally speaking, for local differential privacy, ε < 10 is taken to be a strong privacy regime, 10 ≤ ε < 20 is moderate privacy, and ε ≥ 25 is weak privacy.", "The truncation baseline mechanism does increase accuracy with increasing ε, but never performs much better than the random guesser.", "This is to be expected with high-dimensional embeddings, since the standard deviation of the added noise increases linearly with dimension.", "The word-level MDP mechanism performs significantly better than truncation, achieving relatively good performance for ε ≥ 30.", "There are two significant caveats, however.", "First is the privacy definition: as discussed in the Introduction, for the same ε, word-level MDP is strictly weaker than SentDP.", "The second caveat is the level of ε at which privacy is achieved.", "Despite a weaker privacy definition, the MDP mechanism does not achieve competitive performance until the weak-privacy regime of ε.", "We suspect this is due to two reasons.", "First is the fact that the MDP mechanism does not take advantage of contextual information in each sentence as our technique does; randomizing each word independently does not use higher-level linguistic information.", "Second is the fact that the MDP mechanism does not use domain-specific knowledge as our mechanism does with its use of relevant candidates and domain-specific sentence encodings.", "In comparison, DeepCandidate offers strong utility across tasks and datasets for relatively low values of ε, even into the strong privacy regime.", "Beyond ε = 25, the performance of DeepCandidate tends to max out, approximately 10-15% below the non-private approach.", "This is because DeepCandidate offers a noisy version of an approximation of the document embedding g(x): it cannot perform any better than deterministically selecting the deepest candidate, and even this candidate may be a poor representative of x.", "We consider this room for improvement, since there are potentially many other ways to tune G and select the candidate pool F such that deep candidates are nearly always good representatives of a given document x.", "This is addressed in Figures 4d to 4f.", "We limit the test set to those documents with k in the listed range on the x-axis.", "We set ε = 10, the limit of the strong privacy regime.", "Neither baseline offers performance above that of the random guesser at this value of ε.", "DeepCandidate produces precisely the performance we expect to see: documents with more sentences result in sampling higher-quality candidates, confirming the insights of Section 4.2.", "Across datasets and tasks, documents with more than 10-15 sentences tend to have high-quality embeddings.", "We introduce a strong and interpretable local privacy guarantee for documents, SentDP, along with DeepCandidate, a technique that combines principles from NLP and robust statistics to generate general ε-SentDP embeddings.", "Our experiments confirm that such methods can outperform existing approaches that carry more relaxed privacy guarantees.", "Previous methods have argued that it is virtually impossible to satisfy pure local DP (Feyisetan et al., 2019; Feyisetan and Kasiviswanathan, 2021) at the word level while capturing linguistic semantics.", "Our work appears to refute this notion, at least at the document level.", "In future work, we plan to explore other approaches (apart from k-means) for capturing the structure of the embedding distribution g(μ) to encourage better candidate selection.", "We also plan to experiment
with decoding private embeddings back to documents by using novel candidates produced by a generative model trained on F.", "KC and CM would like to thank ONR for support under grant N00014-20-1-2334.", "KM gratefully acknowledges funding from an Amazon Research Award and Adobe Unrestricted Research Gifts.", "We would also like to thank our reviewers for their insightful feedback." ]
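The candidate-selection mechanism MTD described above (approximate Tukey depth as the utility of an exponential mechanism) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the use of NumPy, and the number of random projections are our own assumptions, and the depth approximation is the simple random-projection scheme described in the text.

```python
import numpy as np

def approx_tukey_depth(f, S, projections):
    """Approximate Tukey depth of point f among sentence embeddings S (rows):
    min over random 1-D projections of the smaller side-count. Because the
    minimum runs over only the sampled directions, this upper-bounds the
    true depth, matching TD-hat >= TD in the text."""
    depths = []
    for w in projections:
        proj_f = f @ w
        proj_S = S @ w
        left = np.sum(proj_S <= proj_f)
        right = np.sum(proj_S >= proj_f)
        depths.append(min(left, right))
    return min(depths)

def m_td(candidates, S, eps, rng, n_proj=20):
    """Exponential mechanism over public candidates: sample a candidate with
    probability proportional to exp(eps * depth / 2). The sensitivity of the
    depth utility is 1 (changing one sentence moves any depth by at most 1)."""
    d = S.shape[1]
    projections = rng.normal(size=(n_proj, d))
    projections /= np.linalg.norm(projections, axis=1, keepdims=True)
    utils = np.array([approx_tukey_depth(f, S, projections) for f in candidates])
    logits = eps * utils / 2.0
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    idx = rng.choice(len(candidates), p=probs)
    return candidates[idx], utils
```

With a large eps the mechanism almost always returns the deepest candidate; with a small eps the distribution flattens toward uniform over F, which is the privacy/utility trade-off eq. (4) quantifies.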
[ "abstain", "abstain", "objective", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "method", "objective", "abstain", "method", "abstain", "result", "objective", "objective", "objective", "other", "method", "method", "other", "objective", "other", "other", "other", "objective", "method", "other", "other", "method", "abstain", "other", "other", "other", "other", "other", "other", "objective", "abstain", "abstain", "method", "method", "other", "other", "other", "objective", "other", "other", "method", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", 
"abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "objective", "other", "other", "other" ]
[ "Although deep learning models have brought tremendous advancements to the field of open-domain dialogue response generation, recent research results have revealed that the trained models have undesirable generation behaviors, such as malicious responses and generic (bor-ing) responses.", "In this work, we propose a framework named Negative Training to minimize such behaviors.", "Given a trained model, the framework will first find generated samples that exhibit the undesirable behavior, and then use them to feed negative training signals for fine-tuning the model.", "Our experiments show that negative training can significantly reduce the hit rate of malicious responses, or discourage frequent responses and improve response diversity.", "End-to-end dialogue response generation can be formulated as a sequence-to-sequence (seq2seq) task: given a dialogue context, the model is asked to generate a high-quality response.", "In recent years, deep learning models, especially seq2seq language generation models (Sutskever et al., 2014; Cho et al., 2014), have brought significant progress to the field of dialogue response generation.", "However, recent research has revealed undesirable behaviors of seq2seq models that are side effects of standard maximum likelihood estimation (MLE) training, such as the generic (boring) response problem (Li et al., 2016), vulnerability to adversarial attacks (Cheng et al., 2018; Belinkov and Bisk, 2017), and the malicious (egregious) response problem (He and Glass, 2019).", "In this work, we propose and explore the negative training framework to correct unwanted behaviors of a dialogue response generator.", "During negative training, we first find or identify input-output pairs for a trained seq2seq model that exhibit some undesirable generation behavior, treat them as bad examples, and use them to feed negative training signals to the model.", "Correspondingly, we regard the training data as good examples and standard MLE training as 
positive training.", "The idea of negative training is inspired from the way parents might teach their children to use language by incorporating both positive and negative training signals.", "For example, when teaching children how to use love and hate, in addition to using positive examples like I love apples but I hate bananas , they might also point out that saying I hate you to someone is considered impolite.", "In this work, negative training is used to address the malicious response problem and the frequent response problem (to be described in Section 3.2 and 3.3) in open-domain dialogue response generation.", "In our experiments, we show that negative training can significantly reduce the hit rate for malicious responses, or discourage frequent responses and greatly improve response diversity.", "In this work we adopt recurrent neural network (RNN) based encoder-decoder seq2seq models (Sutskever et al., 2014; Cho et al., 2014; Mikolov et al., 2010), which are widely used in NLP applications like dialogue response generation (Li et al., 2016), machine translation (Luong et al., 2015), etc.", "We use x = { x 1 , x 2 , ..., x n } to denote one-hot vector representations of the input sequence, which serves as context or history information (e.g. 
the previous utterance), y = {y_1, y_2, ..., y_m} to denote scalar indices of the corresponding reference target sequence, and V as the vocabulary.", "(Footnote 1: The last word y_m is an <EOS> token which indicates the end of a sentence.)", "We use θ to represent the parameters of the seq2seq model, and P_θ(y|x) as the model's generative distribution.", "On the encoder side, every x_t will first be mapped into its corresponding word embedding x_t^emb.", "Then {x_t^emb} are input to a long short-term memory (LSTM) RNN (Hochreiter and Schmidhuber, 1997) to get a sequence of latent representations {h_t^enc}.", "(Footnote 2: Here h refers to the output layer of the LSTM, not the cell memory layer.)", "For the decoder, at time t, y_t is similarly first mapped to y_t^emb.", "Then a context vector c_t, which is supposed to capture useful latent information of the input sequence, needs to be constructed.", "We adopt the attention mechanism for context vector construction: first an attention mask vector a_t (which is a distribution) over the input sequence is calculated to decide which part to focus on, then the mask is applied to the latent vectors to construct c_t: c_t = Σ_{i=1}^n a_t(i) h_i^enc.", "We use the formulation of the general type of global attention, described in (Luong et al., 2015), to calculate the mask.", "During baseline training, standard MLE training with stochastic gradient descent (SGD) is used to minimize the negative log-likelihood (NLL) of the reference target sentence given the input sentence in the data: L_MLE(P_data; θ) = E_{(x,y)∼P_data}(−log P_θ(y|x)) = E_{(x,y)∼P_data}(−Σ_{t=1}^m log P_θ(y_t | y_{<t}, x)), (1) where y_{<t} refers to {y_0, y_1, ..., y_{t−1}}, in which y_0 is set to a begin-of-sentence token <BOS>.", "We consider two popular ways of decoding (generating) a sentence given an input: greedy decoding and sampling.", "In practice for dialogue response generation, greedy decoding will provide stable and reproducible outputs, but is severely affected by the generic response problem.", "Sampling will provide more diverse but less predictable responses, and thus gives rise to the malicious response problem.", "The negative training framework is a two-stage process.", "(Footnote 3: Our code is available at https://github.mit.edu/tianxing/negativetraining_acl2020)", "Given a trained model, we put it under a debugging environment P_test which provides test input samples, get the model's decoded samples, and decide (using well-defined criteria) whether each input-output pair exhibits some undesirable behavior.", "(Footnote 4: Note that here test does not refer to the test data.)", "Then, these bad pairs are used to provide negative training signals.", "Negative training can be derived from Empirical Bayes Risk Minimization (Och, 2003).", "Specifically, the overall objective is to minimize the expected risk that the model exhibits undesirable decoding behavior: L_NEG(P_test; θ) = E_{x∼P_test} E_{y∼P_θ(y|x)} c(x, y), (2) where c(x, y) refers to the binary criterion that will be 1 if (x, y) exhibits undesirable behavior, and 0 otherwise.", "Taking the gradient and applying the log-derivative trick gives ∇_θ L_NEG(P_test; θ) = E_{x∼P_test} E_{y∼P_θ(y|x)} c(x, y) ∇_θ log P_θ(y|x). (3)", "Compared to L_MLE in eq. (1), which maximizes the log-likelihood of training data samples, L_NEG minimizes the log-likelihood of undesirable model samples.", "This is the reason why we call it Negative Training.", "In our preliminary experiments, we find that negative training needs to be augmented with the standard MLE objective L_MLE, encouraging the model to retain its original performance: L_NEG+POS = L_NEG + λ_POS L_MLE. (4)", "In our experiments, we find λ_POS can simply be set to 0.1 to work well.", "In the next two sections, we discuss how the general negative training framework is tailored to the malicious response problem and the frequent response problem, respectively.", "First a list of malicious target sentences is created, then the gibbs-enum algorithm is called to find a trigger input that
will cause the model to assign large probability to the target sequence.", "The following hit types are defined: o-greedy-hit: A trigger input sequence is found such that the model generates the target sentence from greedy decoding.", "o-sample-min/avg-hit: A trigger input sequence is found such that the model generates the target sentence with an minimum/average word log-probability larger than a given threshold T out .", "io-sample-min/avg-hit: In addition to the definition of o-sample-min/avg-hit , we also require that the average log-likelihood of the trigger input sequence, measured by a LM, is larger than a threshold T in .", "This enforces the trigger input to be more likely to be input by real-world users.", "T out is set to the trained seq2seq model's average word log-likelihood on the test data, and T in is set to be a reasonable LM's 6 average word log-likelihood on the test set.", "The intuition is that the model should not assign larger probabilities to the malicious sentences than the reference sentences in the test set.", "Note that these hit types act as criteria c ( x , y ) , indicating whether a target sentence is hit by a trigger input.", "typical seq2seq model trained by MLE has around a 10% hit rate for malicious targets w.r.t. sample-min/avg-hit , across data-sets.", "However, very few malicious targets are hit w.r.t. 
greedy-hit , so in this work, we focus on the malicious response problem for sampling during decoding.", "In Table 1 we show pairs of trigger inputs and the malicious target sentences w.r.t. io-sample-min-hit , for the baseline model on Ubuntu data.", "Now we apply the negative training framework, and aim to reduce the hit rate of a trained model for a given list of malicious targets.", "During each iteration of negative training, for every target sentence y target , we first call the gibbs-enum algorithm to find the trigger input x trigger .", "5 For this paper to be self-contained, we describe the gibbs-enum algorithm in Appendix A. 6 An LSTM language model (LM) is trained on the same training data (regarding each response as an independent sentence).", "Algorithm 1 Negative Training for the Malicious Response Problem Input: Target list Y target , model parameter , learning rate , criterion for hit c , and training data D train for y target in Y target do Get x trigger for y target using the gibbs-enum algorithm.", "And if the target is hit ( c ( x trigger , y target ) = 1 ), we update the model to reduce the log-likelihood log P ( y target | x trigger ) .", "The process is formulated in Algorithm 1 7 .", "For each trigger input, multiple iterations of negative updates are usually needed before the hit criterion is no longer met.", "Note that in each iteration, the gibbs-enum algorithm is called again to find a new trigger input for each target.", "In our experiments, we show that negative training effectively reduces the hit rate for malicious targets after each iteration, and eventually, the gibbs-enum algorithm can no longer find trigger inputs for a large number of targets that were initially hits.", "The generic response problem (Li et al., 2016) for end-to-end dialogue response generation refers to the typical behavior of an MLE-trained model, whereby the generated responses are mostly safe, boring or uninformative (such as i don't know or good idea ).", "7 Note that in the actual implementation, the algorithm is mini-batch based.", "However, it is difficult to invent an automatic criterion to determine whether a response is generic or not.", "In this work, we focus on the frequent response problem, as a sub-problem of the generic response problem.", "It refers to the behavior that a trained model generates exactly the same (usually boring) response, with a high frequency.", "We propose to use a metric called max-ratio to measure how severe the frequent response problem is.", "Given a test set and a decoding method, the model will generate a set of responses, and max-ratio is defined to be the ratio of the most frequent response.", "In our experiments, the baseline models have a max-ratio of around 0.3 for responses like I don't know across different data-sets, showing the severity of the frequent response problem.", "During negative training for frequent response, first a threshold ratio r thres is selected (such as 0.01), and responses with frequency ratio larger than r thres will be discouraged.", "For each iteration, the model's response to each training data input sentence is monitored and responses with frequency larger than r thres will be used as negative examples.", "The frequency statistics are calculated using the current and the last 200 mini-batches.", "The procedure is formulated in Algorithm 2. 
Note that positive training is also needed here for the model to retain its original performance.", "Algorithm 2 Negative Training for the Frequent Response Problem Input: Model parameter , threshold ratio r thres , learning rate , and training data set D train for ( x pos , y pos ) in D train do Generate response y sample from the model.", "In our experiments, it is shown that negative training significantly reduces max-ratio for the model on test data, and greatly increases the diversity of the model's responses.", "We conduct experiments on three publicly available conversational dialogue data-sets: Ubuntu, Switchboard, and OpenSubtitles.", "To save space, descriptions of the data-sets are provided in Appendix B. 4.1 Baseline Model Training For all data-sets, we first train an LSTM-based LM and attention-based seq2seq models with one hidden layer of size 600, and the embedding size is set to 300.", "For Switchboard, a dropout layer with rate 0.3 is added to the model because over-fitting is observed.", "The mini-batch size is set to 64 and we apply SGD training with a fixed starting learning rate (LR) for 10 iterations, and then another 10 iterations with LR halving.", "For Ubuntu and Switchboard, the starting LR is 1, while a starting LR of 0.1 is used for OpenSubtitles.", "The results are shown in Appendix C. 
After negative training, in addition to measuring the hit rate for malicious targets or the diversity of the responses, it is also important to check whether the original sample quality of the baseline model is damaged.", "Towards that end, the perplexity of the model before and after negative training will be compared; we also conduct a human evaluation to measure whether the sample quality is decreased.", "Other popular measurements, such as the BLEU score, have been found to correspond poorly with human judgements (Liu et al., 2016).", "Nevertheless, we also find that the model's BLEU score does not become worse after negative training.", "Following (He and Glass, 2019), a list of malicious targets is created to test whether negative training can teach the model not to generate sentences in the list.", "However, in addition to preventing the model from generating targets in a specific list, it is also important to check whether negative training generalizes to other malicious targets.", "So, a test target list which contains similar but different targets from the training list is also created to test generalization.", "The training and test lists each contain 0.5k targets.", "It is also interesting to investigate whether using more malicious targets for negative training can lower the hit rate on the test list.", "Towards that end, we train a seq2seq paraphrase model using the paraNMT data-set (Wieting and Gimpel, 2017), with a model of the same structure as described in Section 2.", "Table 2: Examples of malicious targets in the training list, the test list, and paraphrases of the training targets which will be used for augmentation (Train / Paraphrase / Test): you are broken / you 're broken / are you broken; i will kill / i 'll kill myself / i 'm going to kill; you are bad / you 're bad / you are really bad; you are stupid / you 're stupid / you are so stupid; you shut up / shut your mouth / can you shut up.", "
Then, the paraphrase model is used to generate paraphrases of the malicious targets in the training target list 8 for augmentation.", "In our experiments, the training list without augmentation is first used for negative training, then it is augmented with 0.5k or 2k paraphrased targets respectively (1 or 4 paraphrase copies for each training target sentence).", "Samples of the malicious targets are shown in Table 2.", "The same training, augmented training and test lists are used for all three data-sets, and there is no sequence-level overlap between the training lists (augmented or not) and the test list.", "In our experiments, we spotted a harmful side effect of negative training where frequent words in the training target list are severely penalized and sometimes receive low probability even in normal perplexity testing, especially for experiments with small POS .", "To alleviate this problem, we use a simple technique called frequent word avoiding (FWA): negative gradients are not applied to the most frequent words in the malicious training target list 9 .", "For example, when doing negative training against the target i hate you <EOS> , only hate will get a negative gradient.", "For all data-sets, negative training (Algorithm 1) is executed on the (trained) baseline model for 20 iterations over the training target list.", "A fixed learning rate of 0.01 and a mini-batch size of 100 are used.", "POS is set to 0.1 for Ubuntu, and to 1 for Switchboard and OpenSubtitles.", "The main results are shown in Table 3.", "For Switchboard we focus on sample-avg-hit because we find very few targets are hit w.r.t. sample-min-hit (similar results are reported in (He and Glass, 2019)), while for Ubuntu and OpenSubtitles we focus on sample-min-hit .", "Note that we get very similar results w.r.t. 
sample-avg-hit as well.", "8 Note that the training and test lists are manually created.", "9 The exact avoiding word set used is { <EOS>, you, i, me, are, to, do } .", "We first observe that, for all data-sets, negative training can effectively reduce the hit rate on the training target list to less than 5% with little or no degradation in perplexity.", "We provide a comparison of the model's behavior in Appendix D. Also, a significant hit rate reduction is achieved on the test target list, which has no overlap with the training target list.", "This shows that negative training, similar to traditional positive training, also generalizes.", "It is also shown that training list augmentation can further reduce the malicious target hit rate consistently for both training and test lists.", "For example, on Ubuntu data, the hit rate after negative training w.r.t. o-sample-min-hit is 12.6%, and can be reduced to 0% with paraphrase augmentation.", "We find that the model's generation behavior in the non-adversarial setting is almost the same as the baseline after negative training.", "For example, the 10-best list from beam search before/after negative training has more than 90% overlap.", "We also find that the model generates similar samples (shown in Appendix G).", "We believe the reason is that negative training focuses on making the model more robust to the adversarial inputs, and the original generation behavior is kept intact by the positive training (Equation 4).", "In this section we report results where the negative training framework (Section 3.3) is applied to tackle the frequent response problem.", "For all datasets, negative training is executed for 20 iterations on the MLE-trained model over the training data, with a selected r thres .", "A fixed learning rate of 0.001 is used for all three data-sets; the mini-batch size is set to 64 and POS is set to 1. 
In this work, we focus on improving the model's greedy decoding behavior instead of beam search for the following two reasons: 1) For the baseline models in our experiments, we found that beam search gives far worse response diversity than greedy decoding, because it favors short responses (usually only of length one) too much, resulting in a much larger max-ratio ; 2) During training, doing beam search is much more time-consuming than greedy decoding.", "To measure the diversity of the model's generated responses, in addition to max-ratio introduced in Section 3.3, which is specially designed for the frequent response problem, we also adopt the entropy metric proposed in (Zhang et al., 2018).", "Given a set of responses from decoding on the test set, Ent-n calculates the entropy of the n-gram distribution: Ent-n = − Σ g G n r ( g ) log r ( g ) (5) where G n is the set of all n-grams that appeared in the response set, and r ( g ) refers to the ratio (frequency) of n-gram g w.r.t. all n-grams in the response set.", "In our experiments with negative training, a harmful side-effect is spotted: during decoding, the model tends to output long and ungrammatical responses such as i do n't know if it 's a real valid deterrent crime crime yeah i 'm satisfied trying not to .", "We believe the reason is that the sentence end token <EOS> gets over-penalized during negative training (it appears in every negative example).", "So, we apply the same frequent word avoiding (FWA) technique used in Section 4.2, except that here only the negative gradient for <EOS> is scaled by 0.1 10 .", "10 We find that scaling by zero will result in extremely short responses.", "In addition to the baseline model, we compare our proposed negative training framework against a GAN (Goodfellow et al., 2014a) approach, where a discriminator D is introduced and the generator G tries to fool the discriminator into believing its samples are real data samples: min G max D V ( D, G ) = min G max D { E ( x , y ) P 
data log D ( x , y ) + E x P data , y G ( | x ) log(1 − D ( x , y )) } (6) where the generator G refers to the seq2seq model P .", "The GAN framework is very attractive for tackling the generic response problem (Li et al., 2017; Zhang et al., 2018), because the discriminator can act as a critic to judge whether a response sample is boring.", "We describe the training details and hyper-parameter setting for the GAN approach in Appendix E. We also provide a comparison to MMI decoding (Li et al., 2016), which is a very popular work in this field.", "We implement MMI-antiLM for our models.", "The experimental results are shown in Table 4.", "The experiments with the best diversity results and non-degenerate sample quality are shown in bold.", "We first observe a large gap on the diversity measures between the baseline models and the test set, especially on Switchboard and OpenSubtitles data.", "That indicates the severity of the frequent/generic response problem.", "Then, results of negative training with different r thres show that negative training can significantly increase response diversity, with little or no loss in PPL or BLEU score (shown in Appendix F) performance.", "For example, max-ratio is reduced by 73.7% and Ent-3 is increased by 149% for Switchboard data.", "Further, consistent improvement is achieved when a smaller r thres is used.", "However, sample quality will decrease (responses becoming too long or ungrammatical) when r thres is too small.", "The reason could be that when too much diversity is asked for, the model will go to extremes to provide diversity, resulting in degradation of sample quality.", "Compared to MMI, note that although on Switchboard/OpenSubtitles MMI gives higher entropy, the max-ratio is not as low as the negative training result, which is the main focus of our work (the frequent response problem).", "We also find MMI's hyper-parameters are difficult to tune: the working set of hyper-parameters does not transfer well between data-sets.", 
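As a reading aid for the diversity discussion above, the two measures it relies on — max-ratio (Section 3.3) and the Ent-n metric of eq. (5) — can be sketched as follows (this is our illustrative sketch, not the authors' released code; the function names and whitespace tokenization are assumptions):

```python
import math
from collections import Counter

def max_ratio(responses):
    # Frequency ratio of the single most common response in the decoded set.
    counts = Counter(responses)
    return counts.most_common(1)[0][1] / len(responses)

def ent_n(responses, n):
    # Ent-n (eq. 5): entropy of the n-gram distribution over the response set.
    ngrams = Counter()
    for response in responses:
        tokens = response.split()
        for i in range(len(tokens) - n + 1):
            ngrams[tuple(tokens[i:i + n])] += 1
    total = sum(ngrams.values())
    return -sum(c / total * math.log(c / total) for c in ngrams.values())
```

A model with the frequent response problem drives max-ratio up (the baselines above sit around 0.3) while Ent-n stays low; negative training pushes the two measures in opposite directions.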
"Further, for MMI in a lot of configuration tries the model gives ungrammatical output samples (this is problem is also mentioned in the paper (Li et al., 2016)).", "For the Ubuntu data, we can not even find a configuration that performs better than the baseline model.", "Further, the vanilla GAN approach is not shown to be effective in our experiments.", "The reason could be that despite its discriminative nature, GAN training still feeds positive gradient for samples from the model (eq.", "(11) and eq.", "(12) in Appendix E), which is not enough to prevent the model from generating them.", "We believe additional techniques (Zhang et al., 2018; Li et al., 2017) are needed for the GAN approach to be effective.", "We show some model samples before and after negative training in Table", "5. It is shown that negative training effectively discourages boring responses, and response diversity is improved.", "However, one limitation is observed that diversity does not necessarily lead to improvement on the informativeness of the response w.r.t. the input (some-times the model generates a completely unrelated response).", "More samples for all three data-sets are included in Appendix G. To rigorously verify negative training is not getting diversity when sacrificing the sample's quality, a human evaluation is conducted and results are shown in Table", "6. It is observed that negative training wins by a significant margin for all three data-sets.", "This shows that, negative training does not damage the quality of the generated samples.", "Note that the human evaluation does not reflect the diversity of the model, because the raters only rate one response at a time.", "The malicious response problem and the gibbs-enum algorithm to find trigger inputs (He and Glass, 2019) originates from a large body of work on adversarial attacks for deep learning models, with continuous input space (e.g. 
image classification) (Goodfellow et al., 2014b; Szegedy et al., 2013), or discrete input space (e.g. sentence classification, or", "seq2seq models) (Papernot et al., 2016; Samanta and Mehta, 2017; Liang et al., 2018; Ebrahimi et al., 2017; Belinkov and Bisk, 2017; Chen et al., 2017).", "Adversarial attacks refer to the phenomenon that when an imperceptible perturbation is applied to the input, the output of the model can change significantly (from correct to incorrect).", "The trigger inputs found by the gibbs-enum algorithm can be regarded as a type of targeted attack, in which the attack triggers the model to assign large probability to a specific malicious target sentence.", "Motivated by the works on adversarial attacks, various adversarial training strategies (Madry et al., 2017; Belinkov and Bisk, 2017; Miyato et al., 2016) have been proposed to make trained models more robust against those attacks.", "During adversarial training, the model is fed with adversarial examples and the correct labels.", "The negative training framework considered in this work differs from adversarial training in that, instead of asking the model to do the right thing (referred to as positive training in this work), the model is trained to not do the wrong thing.", "To the best of our knowledge, this is the first work investigating the concept of negative training for dialogue response models, and the first proposed solution for the malicious response problem.", "The malicious target list used in this work is very similar to the one used in (He and Glass, 2019).", "We propose to add a test target list to test the generalization of negative training.", "Further, we show that the training list can be effectively augmented by utilizing a paraphrase model.", "In this work, we propose a definition for the frequent response problem , as a sub-problem of the generic response problem (Li et al., 2016).", "Much research has been devoted to alleviating the generic response problem in 
end-to-end dialogue response generation: (Li et al., 2016) use the maximal mutual information (MMI) objective, and propose to utilize an auxiliary LM to penalize the generic response during decoding.", "Closely related to this work, sophisticated training frameworks based on GAN (Zhang et al., 2018; Li et al., 2017) have also been shown to be effective, where techniques such as variational information maximization or reward for every generation step (REGS) are proposed to improve GAN training.", "However, in our experiments it is shown that a vanilla GAN approach gives unsatisfactory results.", "Whether negative training 11 is complementary to these frameworks is worth investigating in future work.", "Finally, note that the concept of negative training in this work is very different from the negative samples in word2vec training (Mikolov et al., 2013).", "The negative samples in word2vec training are used to prevent the training from being trivial, and are usually chosen randomly.", "In this work, the negative samples are carefully chosen to exhibit some particular undesirable behavior of the model, and are then used to correct such behavior.", "In this work, we propose the negative training framework to correct undesirable behaviors of a trained neural dialogue response generator.", "The algorithm involves two major steps: first, input-output pairs that exhibit bad behavior are identified, and then they are used for fine-tuning the model as negative training examples.", "We also show that negative training can be derived from an overall objective (eq.", "(2)) to minimize the expected risk of undesirable behaviors.", "In our experiments, we apply negative training to the malicious response problem and the frequent response problem and get significant improvement for both problems." ]
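The record above combines the negative objective of eq. (3) with the positive MLE term of eq. (4) and the frequent-word-avoiding (FWA) mask of Section 4.2. A minimal sketch of that combined loss (our illustration, not the released code; the name neg_pos_loss and the per-token log-probability inputs are assumptions):

```python
def neg_pos_loss(neg_logprobs, pos_logprobs, lambda_pos=0.1, fwa_mask=None):
    # neg_logprobs: per-token log P(y_target | x_trigger) for an identified bad pair;
    #   gradient descent on +log P pushes these probabilities down (L_NEG, eq. 3).
    # pos_logprobs: per-token log P(y | x) for a regular training pair (L_MLE).
    # fwa_mask: optional 0/1 weights zeroing the negative gradient on frequent
    #   words such as <EOS> (the FWA technique of Section 4.2).
    if fwa_mask is None:
        fwa_mask = [1.0] * len(neg_logprobs)
    l_neg = sum(m * lp for m, lp in zip(fwa_mask, neg_logprobs)) / max(sum(fwa_mask), 1.0)
    l_mle = -sum(pos_logprobs) / len(pos_logprobs)
    # Eq. (4): L_NEG+POS = L_NEG + lambda_POS * L_MLE
    return l_neg + lambda_pos * l_mle
```

Minimizing this loss simultaneously lowers the likelihood of the undesirable sample and, through the lambda_pos-weighted MLE term (set to 0.1 or 1 in the experiments above), keeps the model close to its original behavior.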
[ "abstain", "objective", "abstain", "result", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "method", "result", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "method", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "objective", "objective", "method", "objective", "objective", "abstain", "other", "abstain", "abstain", "other", "abstain", "other", "method", "objective", "abstain", "objective", "abstain", "result" ]
[ "We present a word-sense induction method based on pre-trained masked language models (MLMs), which can cheaply scale to large vocabularies and large corpora.", "The result is a corpus which is sense-tagged according to a corpus-derived sense inventory and where each sense is associated with indicative words.", "Evaluation on English Wikipedia that was sense-tagged using our method shows that both the induced senses and the per-instance sense assignment are of high quality, even compared to WSD methods such as Babelfy.", "Furthermore, by training a static word embeddings algorithm on the sense-tagged corpus, we obtain high-quality static senseful embeddings.", "These outperform existing senseful embeddings methods on the WiC dataset and on a new outlier detection dataset we developed.", "The data-driven nature of the algorithm allows inducing corpora-specific senses, which may not appear in standard sense inventories, as we demonstrate using a case study on the scientific domain.", "Word forms are ambiguous, and derive meaning from the context in which they appear.", "For example, the form bass can refer to a musical instrument, a low-frequency sound, a type of voice, or a kind of fish.", "The correct reference is determined by the surrounding linguistic context.", "Traditionally, this kind of ambiguity was dealt with via word sense disambiguation (WSD), a task that disambiguates word forms in context between symbolic sense-ids from a sense inventory such as WordNet (Miller, 1992) or, more recently, BabelNet (Navigli and Ponzetto, 2010).", "Such sense inventories rely heavily on manual curation, are labor intensive to produce, are not available in specialized domains, and are inherently unsuitable for words with emerging senses.", "1 For example, in the current WordNet version, Corona has 6 synsets, none of which relates to the novel Coronavirus.", "This can be remedied by word sense induction (WSI), a task where the input is a given word-type and a corpus, and the output 
is a derived sense inventory for that word.", "Then, sense disambiguation can be performed over the WSI-derived senses.", "The introduction of large-scale pre-trained LMs and Masked LMs (MLMs) seemingly made WSI/WSD tasks obsolete: instead of representing tokens with symbols that encode sense information, each token is associated with a contextualized vector embedding that captures various aspects of its in-context semantics, including the word-sense.", "These contextualized vectors proved to be very effective as features for downstream NLP tasks.", "However, contextualized embeddings also have some major shortcomings: most notably for our case, they are expensive to store ( e.g. BERT embeddings are 768 or 1024 floating point numbers for each token), and are hard to index and query at scale.", "Even if we do manage to store and query them, they are not interpretable, making it impossible for a user to query for a particular sense of a word without providing a full disambiguating context for that word.", "For example, consider a user wishing to query a dataset for sentences discussing Oracle in the mythology-prophet sense, rather than the tech company sense.", "It is not clear how to formulate such a query to an index of contextualized word vectors.", "However, it is trivial to do for an index that annotates each token with its derived sense-id (in terms of UI, after a user issues a query such as Oracle, the system may show a prompt such as did you mean Oracle related to IBM; Sun; Microsoft, or to Prophet; Temple; Queen, allowing them to narrow the search in the right direction).", "Amrami and Goldberg (2018, 2019) show how contextualized embeddings can be used for achieving state-of-the-art WSI results.", "The core idea of their WSI algorithm is based on the intuition, first proposed by Baskaya et al. 
(2013), that occurrences of a word that share a sense also share in-context substitutes.", "Figure 1: Examples of induced word-senses for various words (e.g. bug, Java, chair, pound, train), each sense shown with its representatives and neighbours.", "An MLM is then used to derive top-k word substitutes for each word, and these substitute-vectors are clustered to derive word senses.", "Our main contribution in this work is proposing a method that scales up Amrami and Goldberg (2018)'s work to efficiently annotate all tokens in a large corpus (e.g. 
Wikipedia) with automatically derived word-senses.", "This combines the high accuracy of the MLM-based approach with the symbolic representation provided by discrete sense annotations.", "The discrete annotations are interpretable (each sense is represented as a set of words), editable, indexable and searchable using standard IR techniques.", "We show two applications of the discrete annotations: the first is sense-aware information retrieval (7), and the second is high-quality senseful static word embeddings, which we derive by training a static embeddings model on the large sense-annotated corpus (8).", "We first show how the method proposed by Amrami and Goldberg (2018) can be adapted from deriving senses of individual lemmas to efficiently and cheaply annotating all the corpus occurrences of all the words in a large vocabulary (3).", "Deriving word-sense clusters for all English Wikipedia words that appear as single-token words in BERT-LARGE 's (Devlin et al., 2019) vocabulary, and assigning a sense to each occurrence in the corpus, required 100 hours of cheap P100 GPUs (5 hours of wall-clock time on 20 single-GPU machines) followed by roughly 4 hours on a single 96-core CPU machine.", "The whole process requires less than 50GB of disk space, and costs less than $150 on the Google Cloud platform.", "After describing the clustering algorithm (4), we evaluate the quality of our system and of the automatic sense tagging using SemEval datasets and a new manually annotated dataset we created (5).", "We show that with the produced annotated corpora it is easy to serve sense-aware information retrieval applications (7).", "Another immediate application is feeding the sense-annotated corpora to a static embedding algorithm such as word2vec (Mikolov et al., 2013), for deriving sense-aware static embeddings (8).", "This results in state-of-the-art sense-aware embeddings, which we evaluate both on an existing WiC benchmark (Pilehvar and Camacho-Collados, 2019) and on a 
new challenging benchmark which we create (9).", "In contrast to WSD, which relies on curated sense inventories, our method is data-driven; the resulting senses are therefore corpus-dependent.", "The method can be applied to any domain for which a BERT-like model is available, as we demonstrate by applying it to the PubMed Abstracts of scientific papers, using SCIBERT (Beltagy et al., 2019).", "The resulting senses cover scientific terms which are not typically found in standard sense inventories (6).", "Figure 1 shows examples of induced senses for selected words from the English Wikipedia corpus.", "For each sense we list 5 community-based representatives (3), as well as the 5 closest neighbours in the sense-aware embedding space (8).", "Additional examples are available in Appendix A. Code and resources are available in github.com/allenai/WSIatScale.", "Word Sense Induction and Disambiguation Previous challenges like Jurgens and Klapaftis (2013) focused on word sense induction for small-sized datasets.", "To the best of our knowledge, we are the first to perform large-scale all-words WSI.", "The closest work to our method is the substitution-based method proposed in Amrami and Goldberg (2018, 2019), which is the starting point for our paper.", "In that paper, the authors suggested a WSI algorithm designed for a small dataset (SemEval 2010, 2013) with a predefined set of ambiguous target words (see (3) for more details on the algorithm).", "In our work, we change Amrami and Goldberg (2019) such that we can efficiently run sense induction on all the words in very large corpora.", "An alternative approach for sense tagging is based on Word Sense Disambiguation (WSD).", "The two main WSD methods are Supervised WSD and Knowledge-based WSD.", "Supervised WSD suffers from the difficulty of obtaining an adequate amount of annotated data.", "Indeed, even SemCor, the largest manually annotated tagged corpus, consists of only 226,036 annotated tokens.", "Among different 
supervised WSD methods, Zhong and Ng (2010) suggested an SVM-based approach, and Melamud et al. (2016); Yuan et al. (2016) suggested LSTMs paired with nearest-neighbour classification.", "Knowledge-based WSD (Moro et al., 2014; Pasini and Navigli, 2017), on the other hand, avoids the reliance on a large annotated word-to-sense corpus and instead maps words to senses from a closed sense inventory ( e.g. WordNet (Miller, 1992), BabelNet (Navigli and Ponzetto, 2010)).", "As such, the quality of knowledge-based WSD heavily depends on the availability, quality and coverage of the associated annotated resources.", "Sense Embeddings In 8 we exploit the sense-induced corpus to train sense embeddings.", "Reisinger and Mooney (2010) were the first to suggest creating multiple representations for ambiguous words.", "Numerous recent papers (Chen et al., 2014; Rothe and Schütze, 2015; Iacobacci et al., 2015; Pilehvar and Collier, 2016; Mancini et al., 2017; Iacobacci and Navigli, 2019) aim to produce similar embeddings, all of which use either WordNet or BabelNet as the semantic network.", "Our method is similar to Iacobacci et al. (2015), with the difference being that they rely on semantic networks (via Babelfy (Moro et al., 2014)).", "In contrast, and similarly to us, Pelevina et al. 
(2016) does not rely on lexical resources such as WordNet.", "The authors proposed splitting pretrained embeddings (such as word2vec) into a number of prototype sense-embeddings.", "Yet in our work, we directly learn the multi-prototype sense-embeddings, which is only possible due to the large-scale corpus annotation.", "When comparing both methods in 9.1, we infer that it is better to directly learn multi-prototype sense-embeddings.", "We define large-scale sense induction as deriving sense clusters for all words in a large vocabulary and assigning a sense cluster to each corpus occurrence of these words.", "2 3.2 Algorithm Contextualized BERT vectors contain sense information, and clustering the contextualized vectors results in sense clusters.", "However, storing a 1024-dimensional vector of 32-bit floats for each relevant token in the English Wikipedia corpus requires over 8TB of disk space, making the approach cumbersome and not scalable.", "However, as shown by Amrami and Goldberg (2019), MLM-based word-substitutes also contain the relevant semantic information, and are much cheaper to store: each word-id in BERT-large's vocabulary can be represented by 2 bytes, and storing the top-5 substitutes for each corpus position requires less than 20GB of storage space.", "3 2 In BERT-large-cased-whole-word-masking this corresponds to 16k vocabulary items, which match 1.59B full words in English Wikipedia, or 92% of all word occurrences.", "Analyzing the remaining words, only 0.01% appear in Wikipedia more than 100 times.", "We derive word senses for a substantial chunk of the vocabulary, which also corresponds to the most ambiguous words, as less frequent words are substantially less polysemous (Hernández-Fernández et al., 2016; Fenk-Oczlon et al., 2010; Zipf, 1945).", "3 The size can be reduced further using adaptive encoding techniques that assign fewer bits to frequent words.", "We did not implement this in this work.", "In order to perform WSI at scale, we keep the main 
intuition from Amrami and Goldberg (2019), namely to cluster sparse vectors of lemmas of the top-k MLM-derived word substitutions.", "This results in vast storage savings, and also in a more interpretable representation.", "However, for scalability, we iterate over the corpus sentences and collect the top-k substitutes for all words in the sentence at once, based on a single BERT call for that sentence.", "This precludes us from using the dynamic-patterns component of their method, which requires separately running BERT for each word in each sentence.", "However, as we show in Section 5.1, we still obtain sufficiently high WSI results.", "Annotation: We run BERT-large-cased-whole-word-masking on English Wikipedia, inferring substitutes for all corpus positions.", "For positions that correspond to single-token words, 5 we consider the predicted words, filter stop-words, lemmatize the remaining words (Honnibal et al., 2020), and store the top-5 most probable lemmas to disk.", "This step takes 5 hours on 20 cloud-based GPU machines (a total of 100 GPU hours), resulting in 1.63B tokens with their corresponding top-5 lemmas.", "Inverted Word Index: We create an inverted index mapping from each single-token word to its corpus occurrences (and their corresponding top-5 lemmas).", "This takes 5 minutes on a 96-core CPU machine, and 10GB of disk.", "Sense Induction: For each of the 16,081 lemmas corresponding to single-token words, we retrieve 1000 random instances, 6 and induce senses using 4 The Wikipedia corpus is based on a dump from August 2020, with text extracted using WikiExtractor (Attardi, 2015).", "5 We exclude single-character tokens, stopwords and punctuation.", "6 The clustering algorithm scales super-linearly with the number of instances.", "To reduce computation cost for tokens that appear more than 1000 times in the dataset, we sample min(numOccur, 1000) instances for each token word, and cluster given the subset of instances.", "We then associate each of the 
remaining instances to one of the clusters as explained below. Table 1: Top 5 representatives of the sense-specific communities of the word bass. bass 0: bassist, guitar, lead, drum, rhythm; bass 1: double, second, tail, steel, electric; bass 2: fish, bottom, perch, shark, add; bass 3: tenor, baritone, voice, soprano, singer; bass 4: trap, swing, heavy, dub, dance.", "This process requires 30 minutes on the 96-core CPU machine, and uses 100MB of disk space.", "The average number of senses per lemma is 3.13.", "Each sense is associated with up to 100 representative words, which represent the highest-degree words in the sense's community.", "Table 1 shows the 5 senses found for the word bass with their top-5 representative words.", "See additional examples in Fig. 1 and Appendix A. Tagging: Each of the remaining word-occurrences is associated with a sense cluster by computing the Jaccard similarity between the occurrence's top-5 lemmas and the cluster representatives, and choosing the cluster that maximizes this score.", "For example, an occurrence of the word bass with lemmas tenor, baritone, lead, opera, soprano will be associated with bass 3.", "This takes 100 minutes on a 96-core machine, and 25GB of storage.", "We replace the hierarchical clustering algorithm used by Amrami and Goldberg (2018, 2019) with a community-detection, graph-based clustering algorithm.", "One major benefit of community detection algorithms is that they naturally produce a dynamic number of clusters, and provide a list of interpretable, discrete representative lemmas for each cluster.", "We additionally found this method to be more stable.", "Graph-based clustering for word-sense induction typically constructs, in the final step of the algorithm, a graph from word occurrences", "or collocations, where the goal is to identify sense-specific sub-graphs within the graph that best induce different senses (Klapaftis and Manandhar, 2008, 2010).", "We instead construct the graph based on word substitutes.", "Following Jurgens 
(2011), we pose identifying sense-specific clusters as a community detection problem, where a community is defined as a group of connected nodes that are more connected to each other than to the rest of the graph.", "Graph construction For each word w in the vocabulary, we construct a graph G_w = (V_w, E_w), where each vertex v \in V_w is a substitute-word predicted by the MLM for w, and an edge (u, v) \in E_w connects substitutes that are predicted for the same instance.", "The edge is weighted by the number of instances in which both u and v were predicted.", "More formally, let X = \{x_w^i\}_{i=1}^{n} be the set of all top-k substitutes for n instances of word w, where x_w^i = \{w'_j\}_{j=1}^{k} is the set of k top substitutes for the i-th instance of word w.", "The graph G_w is defined as follows: V_w = \{u : \exists i, u \in x_w^i\}; E_w = \{(u, v) : \exists i, u \in x_w^i \wedge v \in x_w^i\}; W(u, v) = |\{i : (u, v) \subseteq x_w^i\}|. Community detection A community in a subgraph corresponds to a set of tokens that tend to co-occur in the top-k substitutes of many instances, and not to co-occur with the top-k substitutes of other instances.", "This corresponds well to senses, and we take the community's nodes as the sense's representatives.", "We identify communities using the fast Louvain method (Blondel et al., 2008).", "Briefly, Louvain searches for an assignment of nodes to clusters such that the modularity score Q, which measures the density of edges inside communities compared to edges between communities, is maximized: Q = \frac{1}{2m} \sum_{u,v} \left[ W(u, v) - \frac{k_u k_v}{2m} \right] \delta(c_u, c_v), where m is the sum of all edge weights in the graph, k_u = \sum_v W(u, v) is the sum of the weights of the edges attached to node u, c_u is the community to which u is assigned, and \delta is the Kronecker delta function.", "This objective is optimized using an iterative heuristic process.", "For details, see Blondel et al. 
(2008).", "We start by intrinsically evaluating the WSI clustering method on:", "(a) SemEval 2010 and SemEval 2013; and", "(b) a new test set we develop for large-scale WSI.", "In Section 9, we additionally extrinsically evaluate the accuracy of static embeddings derived from a sense-induced Wikipedia dataset.", "When collecting word-substitutes, we lemmatize the top-k list, join equivalent lemmas, remove stopwords and the target word from the list, and keep the top-5 remaining lemmas.", "We evaluate the community-based WSI algorithm on two WSI datasets: SemEval 2010 Task 14 (Manandhar et al., 2010) and SemEval 2013 Task 13 (Jurgens and Klapaftis, 2013).", "Table 2 compares our method to Amrami and Goldberg (2018, 2019) and AutoSense (Amplayo et al., 2019), which is the second-best available WSI method.", "Bert-noDP/DP are taken from Amrami and Goldberg (2019).", "Bert-DP uses dynamic patterns, which precludes wide-scale application.", "We follow previous work (Manandhar et al., 2010; Komninos and Manandhar, 2016; Amrami and Goldberg, 2019) and evaluate SemEval 2010 using F-Score and V-Measure, and SemEval 2013 using Fuzzy Normalized Mutual Information (FNMI) and Fuzzy B-Cubed (FBC), as well as their geometric mean (AVG).", "Our method performs best on SemEval 2010 and is comparable to state-of-the-art results on SemEval 2013.", "The algorithm performs on par with the Bert-noDP method, and does not fall far behind the Bert-DP method.", "We now turn to assess the end-to-end induction and tagging over Wikipedia.", "We evaluate our method on large corpora by randomly sampling 2000 instances from the sense-induced Wikipedia, focusing on frequent words with many senses.", "We manually annotate the samples' senses without access to the automatically induced senses, and then compare our annotations to the system's sense assignments.", "We publicly release our manual sense annotations.", "Sampling and Manual Annotation We used a list of 20 ambiguous words from CoarseWSD-20 
(Loureiro et al., 2021).", "The full list and per-word results can be found in Appendix C. Table 2: Evaluation on the SemEval 2010 (top) and SemEval 2013 (bottom) datasets. SemEval 2010 (Model / F-S / V-M / AVG): AutoSense 61.7 / 9.8 / 24.59; Bert-noDP 70.9 (0.4) / 37.8 (1.5) / 51.7 (1.2); Ours 70.95 (0.63) / 40.79 (0.19) / 53.79 (0.31); Bert-DP 71.3 (0.1) / 40.4 (1.8) / 53.6 (1.2). SemEval 2013 (Model / FNMI / FBC / AVG): AutoSense 7.96 / 61.7 / 22.16; Bert-noDP 19.3 (0.7) / 63.6 (0.2) / 35.1 (0.6); Ours 19.42 (0.39) / 61.98 (0.12) / 34.69 (0.33); Bert-DP 21.4 (0.5) / 64.0 (0.5) / 37.0 (0.5). For each word we sampled 100 passages from English Wikipedia", "with the target word, including inflected forms (case insensitive).", "Unlike CoarseWSD-20, we sampled examples without respect to a predefined set of senses.", "For example, the only two senses that appear in CoarseWSD-20 for the target word arm are arm (anatomy) and arm (computing), leaving out instances matching senses reflecting weapons, subdivisions, mechanical arms, etc.", "With the notion that word sense induction systems should be robust to different annotation schemes, we gave two fluent English speakers 100 sentences for each of the 20 ambiguous words from CoarseWSD-20.", "Annotators were not given a sense inventory.", "Each annotator was asked to label each instance with the matching sense according to their judgment.", "For example, for the target word apple in the sentence \"The iPhone was announced by Apple CEO.\", annotators can label the target sense with Apple Inc., Apple The Company, etc. Annotation Guidelines are available in Appendix B. On average annotators labeled 6.65 senses per word (5.85 and 7.45 average clusters per word for the two annotators). This is more than the 2.65 average senses according to CoarseWSD-20 and less than WordNet's 9.85. 
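The cluster-assignment rule evaluated here, choosing for each occurrence the cluster whose representatives maximize Jaccard similarity with its top-5 substitute lemmas, can be sketched in a few lines. The function names are illustrative (not from the released code); the example clusters reuse two of the bass communities from Table 1:

```python
def jaccard(a, b):
    """Jaccard similarity between two lemma sets."""
    a, b = set(a), set(b)
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def assign_sense(top_lemmas, clusters):
    """Return the sense id of the cluster whose representative lemmas
    best overlap the occurrence's top substitute lemmas."""
    return max(clusters, key=lambda sense: jaccard(top_lemmas, clusters[sense]))

# Illustrative clusters taken from the bass example in Table 1
clusters = {"bass 0": {"bassist", "guitar", "lead", "drum", "rhythm"},
            "bass 3": {"tenor", "baritone", "voice", "soprano", "singer"}}
```

With these clusters, an occurrence whose lemmas are tenor, baritone, lead, opera, soprano is assigned to bass 3, matching the example in the text.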
Results We report our system's performance alongside two additional methods: a strong baseline of the most frequent sense (MFS), and Babelfy (Moro et al., 2014), the sense disambiguation system used in BabelNet (tested using the Babelfy live version, April 2021). Differently from the latter, our system does not disambiguate but induces senses; therefore, clusters are not labeled with a sense tag from a sense inventory. Instead, we represent senses to annotators using a list of common substitute words and a few examples. Thus, after annotating the Wikipedia passages, we additionally asked annotators to name the system's clusters with the same naming convention as in their annotations. Table 3: Classification F1 scores for MFS, Babelfy and our proposed system by annotator on our manually annotated dataset (MFS / Babelfy / Ours): Ann #1: 49.55 / 41.5 / 89.05; Ann #2: 49.9 / 41.95 / 85.95; average: 49.72 / 41.72 / 87.50. Given a similar naming convention between systems and annotators, we report F1 scores of systems' tagging accuracy with respect to the manual annotations. We report F1 averaged over words in Table 3. Our system outperforms both baselines, despite Babelfy having access to a list of predefined word senses. A full by-word table and comprehensive results analysis are in Appendix C. While a 1-to-1 mapping between system clusters and manual senses is optimal, our system sometimes splits senses into smaller clusters, thus annotators will name two system clusters with the same label. Therefore it is also important to report the number of clusters produced by the system compared to the number of senses after the annotators merged similar clusters. Our system produced 7.25 clusters, with 2.25 clusters on average merged by the annotators. 7 Additionally, in rare cases our system encapsulates a few senses in a single cluster: this happened 3 and 5 times for the two annotators across the whole dataset. 
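For reference, the Louvain objective used by the clustering stage, Newman's modularity Q, can be computed directly from a weighted graph and a partition. This is a minimal pure-Python sketch (not the paper's implementation; the per-community form below is algebraically equivalent to the pairwise sum):

```python
from collections import defaultdict

def modularity(weights, community):
    """Newman modularity: Q = (1/2m) * sum_{u,v} [W(u,v) - k_u*k_v/(2m)] * delta(c_u, c_v).
    weights:   {(u, v): w}, each undirected edge listed once
    community: {node: community id}
    """
    two_m = 2.0 * sum(weights.values())   # 2m, twice the total edge weight
    degree = defaultdict(float)           # k_u = sum_v W(u, v)
    intra = defaultdict(float)            # intra-community edge weight per community
    for (u, v), w in weights.items():
        degree[u] += w
        degree[v] += w
        if community[u] == community[v]:
            intra[community[u]] += w
    deg_tot = defaultdict(float)          # total degree per community
    for node, k in degree.items():
        deg_tot[community[node]] += k
    # per-community form: sum_c [ 2*intra_c/2m - (deg_tot_c/2m)^2 ]
    return sum(2.0 * intra[c] / two_m - (deg_tot[c] / two_m) ** 2 for c in deg_tot)
```

As a sanity check, two disconnected unit-weight triangles, each its own community, give Q = 0.5, while placing the whole graph in a single community gives Q = 0.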
6 Application to Scientific Corpora A benefit of a WSI approach compared to WSD methods is that it does not rely on a pre-specified sense inventory, and can be applied to any corpus for which a BERT-like model is available. Thus, in addition to the Wikipedia dataset that has been presented throughout the paper, we also automatically induce senses over a corpus of 31 million PubMed Abstracts, 8 using SciBERT (Beltagy et al., 2019). As this dataset is larger than the Wikipedia dump, the process required roughly 145 GPU hours, resulting in 14,225 sense-annotated lemmas, with an average of 2.89 senses per lemma. This dataset highlights the data-driven advantages of sense induction: the algorithm recovers many senses that are science-specific and are not represented in the Wikipedia corpora. While performing a wide-scale evaluation of the scientific WSI is beyond our scope in this work, we do show 7 This is partially due to using clusters from two casings (e.g., bank and Bank); some of the merges share sense meaning but differ in casing. 8 www.nlm.nih.gov/databases/download/pubmed_medline a few examples to qualitatively demonstrate the kinds of induced senses we get for scientific texts. For each of the words mosaic, race and swine we show the induced clusters and the top-5 cluster representatives for each cluster. mosaic 0: virus, dwarf, mild, cmv, stripe; mosaic 1: partial, chimeric, congenital, heterozygous, mutant; mosaic 2: mixture, landscape, combination, pattern, matrix; mosaic 3: mixed, genetic, spatial, functional, cellular. While senses mosaic 0 (the common mosaic virus of plants) and mosaic 2 (something resembling a mosaic, \"mosaic of ...\") are represented in Wikipedia, senses mosaic 1 (the mosaic genetic disorder) and mosaic 3 (mosaic as a quality, e.g., mosaic border, mosaic pattern) are specific to the scientific corpora (the Wikipedia corpus, on the other hand, includes a sense of mosaic as a decorative art-form, which is not represented in Pubmed). 
race 0: racial, ethnicity, black, rac, gender; race 1: exercise, run, training, competition, sport; race 2: class, group, state, population, genotype; race 3: pcr, clone, sequence, rt, ra. Senses race 0 (ethnic group), race 1 (competition) and race 2 (population/civilization) are shared with Wikipedia, while the sense race 3 (Rapid Amplification of cDNA Ends, a technique for obtaining the sequence length of an RNA transcript using reverse transcription (RT) and PCR) is Pubmed-specific. swine 0: pig, porcine, animal, livestock, goat; swine 1: seasonal, avian, influenza, pandemic, bird; swine 2: patient, infant, group, case, myocardium. Here swine 1 captures the Swine Influenza pandemic, while swine 2 refers to swine as experimental pigs. 7 Sense-aware Information Retrieval An immediate application of a high-quality sense-tagged corpus is sense-aware retrieval. We incorporate the sense information in the SPIKE extractive search system (Shlain et al., 2020) 9 for the Wikipedia and Pubmed datasets. When entering a search term, suffixing it with @ triggers sense selection, allowing the user to narrow the search to the specific sense. 9 spike.apps.allenai.org Consider a scientist looking for PubMed occurrences of the word \"swine\" in its influenza meaning. As shown in Figure 3, this can be easily done by writing swine@ and choosing the second item in the resulting popup window.", "The outputs are sentences with the word \"swine\" in the matching sense. As far as we know, SPIKE is the first system with such WSI capabilities for IR. Similarly, Blloshmi et al. (2021) suggested enhancing IR with sense information, but differently from us, this is done by automatically tagging words with senses from a predefined inventory. 8 Sense-aware Static Embeddings Learning static word embeddings of sense-ambiguous words is a long-standing research goal (Reisinger and Mooney, 2010; Huang et al., 2012). There are numerous real-world tasks where context is not available, precluding the use of contextualized embeddings. 
These include Outlier Detection (Camacho-Collados and Navigli, 2016; Blair et al., 2016), Term Set Expansion (Roark and Charniak, 2000), the Hypernymy task (Breit et al., 2021), etc. Additionally, static embeddings are substantially more efficient to use, can accommodate larger vocabulary sizes, and support efficient indexing and retrieval. Yet, despite their flexibility and success, common word embedding methods still represent ambiguous words as a single vector, and suffer from the inability to distinguish between different meanings of a word (Camacho-Collados and Pilehvar, 2018). Using our sense-tagged corpus, we suggest a simple and effective method for deriving sense-aware static embeddings: we run an off-the-shelf embedding algorithm 10 on the corpus where single-token words are replaced with a concatenation of the word and its induced sense (e.g., \"I caught a bass.\" becomes \"I caught@0 a bass@2.\"). This makes the embedding algorithm learn embeddings for all senses of each word out-of-the-box. 11 An integral property of the embedding algorithm is that it represents both the sense-annotated tokens and the other vocabulary items in the same embedding space; this helps inferring senses for words that are represented in the MLM as multi-token words (even though these correspond to less-frequent and often less ambiguous words (Hernández-Fernández et al., 2016; Fenk-Oczlon et al., 2010; Zipf, 1945)). 10 We use the CBOW variant of the word2vec algorithm (Mikolov et al., 2013) as implemented in Gensim (Řehůřek and Sojka, 2010). We derive 100-dimensional embeddings using the negative-sampling algorithm and a window size of 5. 11 A similar approach was used by Iacobacci et al. (2015) over a corpus which was labeled with BabelNet and WordNet senses. Figure 3: User interaction in SPIKE when looking for the word \"swine\" in its \"swine flu\" sense (unlike the animal/experimental pig senses). 
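The corpus transformation described above, rewriting each sense-annotated single-token word as word@senseid before running word2vec, can be sketched as follows. The function name is illustrative, and the sense ids in the example are taken from the paper's own "I caught a bass." illustration:

```python
def sense_tag_tokens(tokens, senses):
    """Rewrite a tokenized sentence so each sense-annotated position
    becomes 'word@senseid'; all other tokens pass through unchanged.
    senses: {position index: induced sense id}
    """
    return [f"{tok}@{senses[i]}" if i in senses else tok
            for i, tok in enumerate(tokens)]
```

For example, `sense_tag_tokens(["I", "caught", "a", "bass", "."], {1: 0, 3: 2})` yields the tokens of "I caught@0 a bass@2 .", which can then be fed directly to an off-the-shelf embedding trainer.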
For example, in the top-5 nearest neighbours for the different bass senses shown below, smallmouth and pumpkinseed, multi-token words in BERT-large's vocabulary, are close neighbours of the bass instances that correspond to the fish sense. bass 0: guitar 0, drums 0, guitar 3, keyboards 0, keyboard 0; bass 1: tuba, trombone 0, horn 0, flute 0, trumpet 0; bass 2: crappie, smallmouth, pumpkinseed, sunfish, perch 0; bass 3: baritone 0, tenor 0, alto 0, bassoon, flute 0; bass 4: synth, drum 1, synths, breakbeats, trap 4. Note that some neighbours are sense-annotated (single-token words that were tagged by our system), while others are not (multi-token words). For English Wikipedia, we obtain a total vocabulary of 1.4M forms, 90,023 of which are sense-annotated. Compared to the community-based representative words, the top neighbours in the embedding space tend to capture members of the same semantic class rather than direct potential replacements. 9 Sense-aware Embeddings Evaluation 9.1 WiC Evaluation Pilehvar and Camacho-Collados (2019) introduced the WiC dataset for the task of classifying word meaning in context. Each instance in WiC has a target word and two contexts in which it appears. The goal is to classify whether the word in the different contexts shares the same meaning. E.g., given the two contexts \"There's a lot of trash on the bed of the river\" and \"I keep a glass of water next to my bed when I sleep\", our method should return False, as the sense of the target word bed is different. Table 4: Accuracy scores on the WiC dataset (systems marked with * make use of external lexical resources). Method / Acc.: JBT (Pelevina et al., 2016) 53.6; Sense-aware Embeddings (this work) 58.3; SW2V* (Mancini et al., 2017) 58.1; DeConf* (Pilehvar and Collier, 2016) 58.7; LessLex* (Colla et al., 2020) 59.2. Table 5: OPP and Accuracy on the 25-7-1-8 dataset. Word Embeddings / OPP / Acc.: GloVe 93.31 / 65; word2vec 93.31 / 68; DeConf 93.37 / 73; Ours (Skip-gram) 96.31 / 83.5; Ours (CBOW) 96.68 / 86. 
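The WiC task just described can be handled with sense-aware static embeddings by matching each context to its closest sense vector and thresholding the cosine distance between the two matched senses. The sketch below is one plausible reading of such a classifier; the function names and the toy two-dimensional vectors are illustrative, not from the paper's code:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def same_meaning(sense_vecs, ctx1, ctx2, threshold=0.68):
    """Classify two WiC contexts of a target word.
    sense_vecs: embeddings of the target word's induced senses
    ctx1, ctx2: lists of context-word vectors, one list per context
    Returns True when the matched sense embeddings are within the
    cosine-distance threshold of each other."""
    def matched_sense(ctx):
        centroid = [sum(col) / len(ctx) for col in zip(*ctx)]
        return max(sense_vecs, key=lambda s: cosine(s, centroid))
    s1, s2 = matched_sense(ctx1), matched_sense(ctx2)
    return (1.0 - cosine(s1, s2)) <= threshold
```

With two orthogonal toy senses, contexts pointing at the same sense are classified as the same meaning, while contexts matching different senses are not.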
Our method is the following: given the sense-aware embeddings, a target word w and two contexts, we calculate each context vector as the average of the context words. The matching sense vector is the closest out of all of w's sense embeddings. We then classify the two contexts as corresponding to different meanings if the cosine distance between the matched sense embeddings is more than a threshold apart. We do not use the train set. The threshold is optimized over the development set and fixed to 0.68. This task has a few tracks; we compare our embedding system to the best-performing methods from the Sense Representations track. Of these, JBT (Pelevina et al., 2016), a lexical embedding method, is the only one that does not use an external lexical resource (induction). The results in Table 4 show accuracy on this task. We outperform the induction method, and are on par with the lexicon-based methods, despite not using any external lexical resource. 9.2 Evaluation via Outlier Detection Another setup for evaluating word embeddings is that of outlier detection: given a set of words, identify which one does not belong to the set (Blair et al., 2016). Outlier detection instances are composed of in-group elements and a set of outliers from a related semantic space. In each evaluation round, one outlier is added to the in-group items, and the algorithm is tasked with finding the outlier. Existing outlier detection datasets either did not explicitly target sense-ambiguous words (8-8-8 (Camacho-Collados and Navigli, 2016), WikiSem500 (Blair et al., 2016)) or explicitly removed ambiguous words altogether (25-8-8-sem (Brink Andersen et al., 2020)). Ambiguity-driven Outlier Detection. We construct a challenge set for outlier detection that specifically targets ambiguous cases. 
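Outlier detection in this setting is typically scored with the average-pairwise-similarity ranking of Camacho-Collados and Navigli (2016): each candidate word is scored by its average similarity to the remaining words, and the lowest-scoring word is predicted as the outlier. This is a minimal sketch of one plausible reading of that criterion; the helper names and toy vectors are illustrative:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def outlier_scores(vectors):
    """vectors: {word: embedding}. Score each word by its average cosine
    similarity to the other words; the predicted outlier is the word
    with the lowest score."""
    scores = {}
    for w, vw in vectors.items():
        sims = [cosine(vw, vu) for u, vu in vectors.items() if u != w]
        scores[w] = sum(sims) / len(sims)
    return scores
```

On a toy set where three vectors cluster together and one points elsewhere, the isolated word receives the lowest score and is ranked as the outlier.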
In order to account for sense ambiguity, we add a distractor to each of the in-group sets: the distractor is an item which has multiple senses, where the most salient sense does not belong to the group, while another sense does belong to the group. For example: In-group: zeus, hades, poseidon, aphrodite, ares, athena, artemis. Outliers: mercury, odysseus, jesus, sparta, delphi, rome, wrath, atlanta. Distractor: nike. Here, a model which does not explicitly represent the Greek-god sense of nike is likely to place it far away from the in-group instances, causing it to be mistakenly marked as the outlier. The starting point for our dataset is 25-8-8-Sem (Brink Andersen et al., 2020). This dataset contains 25 test groups, each with 8 in-group elements and 8 outliers, resulting in 200 unique test cases. The outliers are sorted in a decreasing degree of relatedness to the in-group elements. In our dataset we replace one of the in-group elements with an ambiguous distractor. For example, in the Greek-gods case above, we replaced the original 8th item (\"hera\") with the ambiguous distractor nike.", "12 The dataset consists of 25 groups of 7 non-ambiguous group elements, 1 distractor and 8 outliers (25-7-1-8), similarly resulting in 200 unique test cases.", "Method Following Camacho-Collados and Navigli (2016), we rank each word's likelihood of being the outlier by the average of all pair-wise semantic similarities of the words in W \\ {w}.", "Therefore, if w is an outlier, this score should be low.", "See Appendix D for additional details.", "Metrics Camacho-Collados and Navigli (2016) 12 We additionally changed terms that are debatably ambiguous, and changed the \"African animals\" group to the more general \"animals\", as no distractors were found.", "proposed evaluating outlier detection using accuracy (the fraction of correctly classified outliers among the total cases) and the Outlier Position Percentage (OPP) metric.", "OPP indicates how close outliers are to being 
classified correctly: OPP = \frac{1}{|D|} \sum_{W \in D} \frac{OP(W)}{|W| - 1} \times 100, where OP(W) is the position of the outlier according to the algorithm.", "Results In Table 5 we report performance on the 25-7-1-8 set.", "Word2vec and GloVe accuracy scores are low while having high OPP scores.", "This is the expected behaviour for embeddings without sense awareness.", "These will position the distractor and the outlier furthest away from the group items, while not being designed to make the hard decision required for high accuracy.", "Our sense-aware embeddings strongly outperform GloVe and word2vec, which do not include senses.", "Our embeddings also outperform the word embeddings proposed in DeConf (Pilehvar and Collier, 2016), which are the best-performing publicly available sense embeddings on WiC.", "We show that word-sense induction algorithms based on word-substitutions derived from MLMs are easily scalable to large corpora and vocabulary sizes, allowing us to efficiently obtain high-quality sense-annotated corpora.", "We demonstrate the utility of such large-scale sense annotation, both in the context of a scientific search application, and for deriving high-quality sense-aware static word embeddings.", "As a secondary contribution, we also develop a new variant of the Outlier Detection evaluation task, which explicitly targets ambiguous words.", "This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme, grant agreement No. 802774 (iEXTRACT)." ]
[ "method", "abstain", "result", "result", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "abstain", "other", "objective", "abstain", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "result", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "other", "method", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "objective", "other" ]
[ "Despite recent progress in conversational question answering, most prior work does not focus on follow-up questions.", "Practical conversational question answering systems often receive follow-up questions in an ongoing conversation, and it is crucial for a system to be able to determine whether a question is a follow-up question of the current conversation, for more effective answer finding subsequently.", "In this paper, we introduce a new follow-up question identification task.", "We propose a three-way attentive pooling network that determines the suitability of a follow-up question by capturing pair-wise interactions between the associated passage, the conversation history, and a candidate follow-up question.", "It enables the model to capture topic continuity and topic shift while scoring a particular candidate follow-up question.", "Experiments show that our proposed three-way attentive pooling network outperforms all baseline systems by significant margins.", "Conversational question answering (QA) mimics the process of natural human-to-human conversation.", "Recently, conversational QA has gained much attention, where a system needs to answer a series of interrelated questions from an associated text passage or a structured knowledge graph (Choi et al., 2018; Reddy et al., 2019; Saha et al., 2018).", "However, most conversational QA tasks do not explicitly focus on requiring a model to identify the follow-up questions.", "A practical conversational QA system must possess the ability to understand the conversation history well, and to identify whether the current question is a follow-up of that particular conversation.", "Consider a user who is trying to have a conversation with a machine (e.g., Siri, Google Home, Alexa, Cortana, etc).", "First, the user asks a question and the machine answers it.", "When Passage : ...script for Verhoeven's first American film, Flesh and Blood (1985), which starred Rutger Hauer and Jennifer Jason Leigh.", "Verhoeven 
moved to Hollywood for a wider range of opportunities in filmmaking.", "Working in the U.S. he made a serious change in style, directing big-budget, very violent, special-effects-heavy smashes RoboCop and Total Recall.", "RoboCop, for ...", "Verhoeven followed those successes with the equally intense and provocative Basic Instinct (1992) ... received two Academy Award nominations, for Film Editing and for Original Music ...", "Conversation history: Q: What was the first film Verhoeven did in the US?", "A: Flesh and Blood Q: What genre of films did he make?", "A: big-budget, very violent, special-effects-heavy smashes Candidate follow-up question examples: What year did his first film debut?", "Valid Did he make any films during his final years?", "Invalid What did she do after her debut film?", "Invalid Figure 1: Examples illustrating the follow-up question identification task.", "the user asks the second question, it is very important for the machine to understand whether it is a follow-up of the first question and its answer.", "Further, this needs to be determined for every question posed by the user in that ongoing conversation.", "By identifying whether the question is a follow-up question, a machine determines whether the conversation history is relevant to the question.", "Based on this decision, it is expected to use a suitable answer finding strategy for answering the question.", "Additionally, a QA system first retrieves some relevant documents using an information retrieval (IR) engine to answer a question.", "If a follow-up question identifier predicts the question as an invalid follow-up question given the retrieved documents, it can communicate to the IR engine to retrieve additional supporting documents.", "A few example instances are given in Figure 1 to illustrate the follow-up question identification task in a conversational reading comprehension setting.", "We present a new dataset for learning to identify follow-up questions, namely LIF.", 
"Given a text passage as knowledge and a series of question-answer pairs as conversation history, it requires a model to identify whether a candidate follow-up question is valid or invalid.", "The proposed dataset requires a model to understand both topic continuity and topic shift to correctly identify a follow-up question.", "For instance, in the first example given in Figure 1, a model needs to capture the topic continuity from the first question-answer pair (i.e., first film is Flesh and Blood) and the topic shift from the second question-answer pair (i.e., genre of films) of the conversation history.", "The candidate follow-up question in the second example is invalid since the associated passage does not provide any information about his final years.", "The last follow-up question example is invalid since Verhoeven is a he, not she.", "There has been some research in the past which focuses on identifying what part of the conversation history is important for processing follow-up questions (Bertomeu et al., 2006; Kirschner and Bernardi, 2007).", "However, the recently proposed neural network-based models for conversational QA have not explicitly focused on follow-up questions.", "In this paper, we propose a three-way attentive pooling network for follow-up question identification in a conversational reading comprehension setting.", "It evaluates each candidate follow-up question based on two perspectives: topic shift and topic continuity.", "The proposed model makes use of two attention matrices, which are conditioned over the associated passage, to capture topic shift in a follow-up question.", "It also relies on another attention matrix to capture topic continuity, directly from the previous question-answer pairs in the conversation history.", "For comparison, we have developed several strong baseline systems for follow-up question identification.", "1. We propose a new task for follow-up question identification in a conversational reading comprehension setting which supports automatic evaluation.", "2. We present a new dataset, namely LIF, which is derived from the recently released conversational QA dataset QuAC (Choi et al., 2018).", "3. We propose a three-way attentive pooling network which aims to capture topic shift and topic continuity for follow-up question identification.", "Given a passage, a sequence of question-answer pairs in a conversation history, and a candidate follow-up question, the task is to identify whether or not the candidate follow-up question is a valid follow-up question.", "We denote the passage as P, which consists of T tokens.", "Let the sequence of previous questions and their corresponding answers be denoted as {Q_1, Q_2, ..., Q_M} and {A_1, A_2, ..., A_M}, where M is the number of previous question-answer pairs in the conversation history.", "The candidate follow-up question is denoted as C.", "We formulate this task as a binary classification task, which is to classify C as valid or invalid.", "In the remainder of this paper, we denote the length of the candidate follow-up question as V.", "In our model, we concatenate all previous questions and their answers with special separator tokens as follows: Q_1 | A_1 || Q_2 | A_2 || ... || Q_M | A_M.", "The combined length of the previous question-answer pairs in the conversation history is denoted as U.", "We rely on the QuAC dataset (Choi et al., 2018) to prepare the LIF dataset.", "Each question in the QuAC dataset is assigned one of three categories: should ask , could ask , or should not ask a follow-up question.", "We construct the valid instances of the dataset using the should ask follow-up question instances.", "Since the test set of QuAC is hidden, we split the QuAC development set into two halves to generate the development set and the test set of LIF.", "The split is done at the passage level to ensure that there is no overlap in the passages used in the development and test sets.", "To create each instance in LIF from QuAC, we take the associated passage, the previous question-answer pairs up to the point where the annotation says should ask a follow-up question, and the next question as the gold valid candidate follow-up question.", "For each instance, we sample invalid follow-up questions from two sources:", "1. Questions from other conversations in QuAC which can serve as potential distractors, and", "2. 
Non-follow-up questions from the same conversation in QuAC which occur after the gold valid follow-up question.", "The sampling from the first source involves a two-step filtering process.", "We first compare the cosine similarity between the associated passage and all the questions from the other conversations by using embeddings generated by InferSent (Conneau et al., 2017).", "We take the top 200 questions based on higher similarity scores.", "In the second step, we concatenate the gold valid candidate follow-up question with the question-answer pairs in the conversation history to form an augmented follow-up question.", "Then, we calculate the token overlap count between each ranked question obtained in the first step and the augmented follow-up question.", "We normalize the token overlap count by dividing it by the length of the ranked question (after removing stop words).", "For each valid instance, we fix a threshold and take at least one but up to two questions with the highest normalized token overlap count as invalid candidate follow-up questions.", "We also introduce potential distractors from the same conversation in QuAC.", "We check through the remaining question-answer pairs which occur after the valid follow-up question.", "We tag a question as an invalid candidate if the question appears just before it is labeled with should not ask a follow-up question.", "Throughout the invalid question sampling process, we exclude generic follow-up questions containing keywords such as what else , any other , interesting aspects and so on, to avoid selecting follow-up questions which can be potentially valid (e.g., Any other interesting aspects about this article? ).", "For the training and the development sets, we combine all candidate follow-up questions from both other conversations and the same conversation.", "We keep three test sets with candidates from different sources: from both other conversations and the same conversation ( Test-I ), from other conversations only ( Test-II ), and from the same conversation only ( Test-III ).", "The overall dataset statistics are given in Table 1.", "We randomly sampled 100 invalid follow-up questions from the Test-I set, and manually checked them.", "We verified that 97% of them are truly invalid.", "The model is required to identify whether the subject of the question is the same as in the associated passage or in the conversation history, which is often complicated by the introduction of pronouns (e.g., I , he , she ) and possessive pronouns (e.g., my , his , her ).", "Such resolution of pronouns is a critical aspect in determining the validity of a follow-up question.", "It also needs to examine whether the actions and the characteristics of the subject described in the candidate follow-up question can be logically inferred from the associated passage or the conversation history.", "Moreover, capturing topic continuity and topic shift is necessary to determine the validity of a follow-up question.", "The subjects and their actions or characteristics in the invalid follow-up questions are often mentioned in the passages, but associated with different topics.", "We randomly sampled 100 invalid instances from the Test-I set, and manually analyzed them based on different properties as given in Table 2.", "We found that 35% of the invalid questions have identical topics as the associated passages, 42% of the questions require pronoun resolution, 11% of the questions have the same subject entity as the gold follow-up question, and 5% of the questions have the same subject entity as the last question in the conversation history.", "Pronouns in 8% of the invalid questions match the pronouns in the corresponding valid follow-up questions, and match the last question in the conversation history for another 8% of the cases.", "For 7% of the cases, the question types are the same as the valid questions, and for 6% of the cases they are the same as the last question in the conversation history.", "We also observed that 4% of the invalid questions mention the same actions as in the corresponding valid ones, and they are the same as the last question in the conversation history.", "[Table 2 (excerpt): Properties / % / Example. Identical topic: 35. P: ... the band released their second album ... Q: Is A Rush of Blood to the Head their album name?]", "In this section, we describe our proposed three-way attentive pooling network.", "(The source code and data are released at https://github.com/nusnlp/LIF .)", "First, we apply an embedding layer to the associated passage, the conversation history, and the candidate follow-up question.", "Further, they are encoded to derive sequence-level encoding vectors.", "Then the proposed three-way attentive pooling network is applied to score each candidate follow-up question.", "We use both character and word embeddings.", "(We also experimented with ELMo and BERT but did not observe any consistent improvement.)", "Similar to Kim (2014), we obtain the character-level embedding using convolutional neural networks (CNN).", "First, characters are embedded as vectors using a character-based lookup table; these vectors are fed to a CNN, with the character embedding size serving as the input channel size of the CNN.", "Then the CNN outputs are max-pooled over the entire width to obtain a fixed-size vector for each 
token.", "We use pre-trained vectors from GloVe (Pennington et al., 2014) to obtain a fixed-length word embedding vector for each token.", "Finally, both word and character embeddings are concatenated to obtain the final embeddings.", "For encoding the conversation history and the candidate follow-up question, we use bidirectional LSTMs (Hochreiter and Schmidhuber, 1997).", "We represent the sequence-level encoding of the conversation history and the candidate follow-up question as Q ∈ R^{U×H} and C ∈ R^{V×H}, respectively, where H is the number of hidden units.", "Similarly, we compute the sequence-level passage encoding, resulting in D ∈ R^{T×H}.", "Then a similarity matrix A ∈ R^{T×U} is derived, where A = D Q^⊤.", "We then jointly encode the passage and the conversation history.", "We apply a row-wise softmax function on A to obtain R ∈ R^{T×U}.", "Now, for all the passage words, the aggregated representation of the conversation history is given as G = R Q ∈ R^{T×H}.", "The aggregated vectors corresponding to the passage words in G are then concatenated with the passage vectors in D, followed by another BiLSTM to obtain a joint representation V ∈ R^{T×H}.", "In addition, multi-factor self-attentive encoding (Kundu and Ng, 2018) is applied on the joint representation.", "If m represents the number of factors, multi-factor attention F^{[1:m]} ∈ R^{T×m×T} is formulated as: F^{[1:m]} = V W_f^{[1:m]} V^⊤, (1) where W_f^{[1:m]} ∈ R^{H×m×H} is a 3-way tensor.", "A max-pooling operation is performed on F^{[1:m]}, over the number of factors, resulting in the self-attention matrix F ∈ R^{T×T}.", "We normalize F by applying a row-wise softmax function, resulting in F̂ ∈ R^{T×T}.", "Now the self-attentive encoding can be given as M = F̂ V ∈ R^{T×H}.", "The self-attentive encoding vectors are then concatenated with the joint encoding vectors, and a feed-forward neural network-based gating is applied to control the overall impact, resulting in Y ∈ R^{T×2H}.", "The final passage encoding P ∈ R^{T×H} is obtained by applying another BiLSTM layer on Y.", "Now, we use our proposed three-way attentive pooling network to score every candidate follow-up question.", "The architecture of the network is depicted in Figure 2.", "Attentive pooling (AP) was first proposed by dos Santos et al. (2016) and successfully used for the answer sentence selection task.", "AP is essentially an attention mechanism that enables joint learning of the representations of a pair of inputs as well as their similarity measurement.", "The primary idea is to project the paired inputs into a common representation space to compare them more plausibly even if both inputs are not semantically comparable, such as a question-answer pair.", "In this paper, we extend the idea of attentive pooling to the proposed three-way attentive pooling network for the follow-up question identification task, where the model needs to capture the suitability of a candidate follow-up question by comparing it with the conversation history and the associated passage.", "In particular, the proposed model aims to capture topic shift and topic continuation in the follow-up question.", "dos Santos et al. (2016) used a single attention matrix to compare a pair of inputs.", "In contrast, our proposed model relies on three attention matrices, where the two additional attention matrices make use of the associated passage.", "Moreover, our proposed model is developed to deal with a more complex follow-up question identification task, in contrast to the model proposed in dos Santos et al. 
(2016).", "We score each candidate follow-up question based on its relevance to the conversation history from two different perspectives: (1) considering the associated passage (i.e., knowledge) and (2) without considering the passage.", "In this step, we compute three different attention matrices for capturing the similarity between the conversation history and the candidate follow-up question: two matrices when the associated passage is taken into consideration, and another one when the passage is not considered.", "The attention matrix A^{q,p} ∈ R^{T×U}, which captures the token-wise contextual similarity between the conversation history and the passage, is given as: A^{q,p} = f_attn(Q, P), (2) where the f_attn(.) function can be written as f_attn(Q, P) = P Q^⊤.", "Intuitively, A^{q,p}(i, j) captures the contextual similarity score between the i-th token in the passage (i.e., the i-th row of P) and the j-th token in the conversation history (i.e., the j-th row of Q).", "Similarly, the attention matrix A^{c,p} ∈ R^{T×V}, which captures the contextual similarity of a candidate follow-up question and the associated passage, is given as: A^{c,p} = f_attn(C, P). (3)", "Note that A^{q,p} and A^{c,p} will be used jointly to capture the similarity between Q and C, given P.", "The attention matrix A^{c,q} ∈ R^{U×V}, which captures the similarity between a candidate follow-up question and the conversation history without considering the associated passage, is given as: A^{c,q} = f_attn(C, Q). (4)", "Attention pooling: after obtaining the attention matrices, we apply column-wise or row-wise max-pooling.", "When the associated passage is considered to capture the similarity between the conversation history and the candidate follow-up question, we perform column-wise max-pooling over A^{q,p} and A^{c,p}, followed by normalization with softmax, resulting in r^{qp} ∈ R^U and r^{cp} ∈ R^V, respectively.", "For instance, r^{qp} is given as (for 1 ≤ i ≤ U): r^{qp} = softmax(..., max_{1≤j≤T}[A^{q,p}(j, i)], ...). (5)", "Intuitively, the i-th element in r^{qp} represents the relative importance score of the contextual encoding of the i-th token in the conversation history with respect to the passage encoding vectors.", "Every element of r^{cp} can be interpreted in the same fashion.", "When the associated passage encoding is not considered, we perform both row-wise and column-wise max-pooling over A^{c,q} to generate r^{qc} ∈ R^U and r^{cq} ∈ R^V, respectively.", "In this step, we score each candidate follow-up question.", "Each candidate C is scored from two perspectives, with and without consideration of the associated passage encoding P: score(C) = s_1 + s_2 = f_sim(C, Q | P) + f_sim(C, Q), (6) where C is the encoding of C.", "[Figure 2: Architecture of the three-way attentive pooling network.]", "The similarity function f_sim(C, Q | P) = x y^⊤, where x = r^{qp} Q ∈ R^H and y = r^{cp} C ∈ R^H.", "The other similarity function f_sim(C, Q) = m n^⊤, where m = r^{qc} Q ∈ R^H and n = r^{cq} C ∈ R^H.", "We use binary cross entropy loss for training the model.", "For prediction, we find a threshold that maximizes performance on the development set.", "For the test instances, we use the threshold to predict whether a follow-up question is valid or invalid.", "We develop several rule-based, statistical machine learning, and neural baseline models.", "For all the models, a threshold is determined based on the best performance on the development set.", "The first rule-based model counts the token overlaps between the candidate follow-up question and the passage, and between the candidate follow-up question and the conversation history.", "We normalize the count values based on the length of the candidate follow-up question.", "Next, we develop two models based on the contextual similarity scores using InferSent sentence embeddings (Conneau et al., 2017).", "The two models compare the candidate follow-up question with the associated passage and the conversation history, respectively.", "The similarity scores are computed based 
on vector cosine similarity.", "We also develop another rule-based model using tf-idf weighted token overlap scores.", "We prepend the last question from the conversation history to the candidate follow-up question and add up the tf-idf weights of the overlapping words between the concatenated context and the passage.", "We handcraft two sets of features for the statistical machine learning models.", "One set of features consists of tf-idf weighted GloVe vectors.", "Since we adopt 300-dimensional GloVe vectors in our experiments, these features are of dimension 300.", "Another set of features consists of word overlap counts.", "We compute the pairwise word overlap counts among the candidate follow-up question, the associated passage, and the conversation history.", "The overlap count-based features are of dimension 3.", "We experiment with logistic regression using the derived features.", "We also develop several neural baseline models.", "We first concatenate the associated passage, the conversation history, and the candidate follow-up question, followed by embedding (the same as described earlier).", "Then, we apply sequence-level encoding with either BiLSTM or CNN.", "For CNN, we use equal numbers of unigram, bigram, and trigram filters, and the outputs are concatenated to obtain the final encoding.", "Next, we apply either global max-pooling or attentive pooling to obtain an aggregated vector representation, followed by a feed-forward layer to score the candidate follow-up question.", "Let the sequence encoding of the concatenated text be E ∈ R^{L×H}, and e_t be the t-th row of E.", "The aggregated vector ē ∈ R^H for attentive pooling can be obtained as: a_t ∝ exp(e_t w^⊤); ē = a E, (7) where w ∈ R^H is a learnable vector.", "We also develop a baseline model using BERT (Devlin et al., 2019).", "We first concatenate all the inputs and then apply BERT to derive the contextual vectors.", "Next, we aggregate them into a single vector using attention.", "Then a feed-forward layer is used to score each candidate follow-up question.", "In this section, we present the experimental settings, results, and performance analysis.", "We do not update the GloVe vectors during training.", "We use 100-dimensional character-level embedding vectors.", "The number of hidden units in all the LSTMs is 150 (H = 300).", "We use dropout (Srivastava et al., 2014) with probability 0.3.", "Following Kundu and Ng (2018), we set the number of factors as 4 in multi-factor attentive encoding.", "We use the Adam optimizer (Kingma and Ba, 2015) with learning rate 0.001 and clipnorm 5.", "Following Choi et al. (2018), we consider at most 3 previous question-answer pairs in the conversation history.", "This being a binary classification task, we use precision, recall, F1, and macro F1 as evaluation metrics.", "All scores reported in this paper are in %.", "Table 3 shows that our proposed model outperforms the competing baseline models by significant margins across all test sets.", "We perform statistical significance tests using paired t-test and bootstrap resampling.", "Performance of our proposed model is significantly better (p < 0.01) than the best baseline system which provides the highest Macro-F1 score on Test-I.", "The LSTM-based neural baselines perform better than the rule-based and statistical machine learning models in most cases.", "On Test-III, the statistical models tend to predict valid , and the number of valid instances is much higher than the invalid instances (about 75%:25%), resulting in high Valid F1 scores.", "[Table 4: An ablation study on the development set. Model / V-P / V-R / V-F1 / Macro F1. History: 72.7 / 67.0 / 69.7 / 77.9; Knowledge: 75.8 / 73.8 / 74.8 / 81.4; A^{c,q}: 71.8 / 75.8 / 73.7 / 80.2; Multi-factor Attn: 75.6 / 76.4 / 76.0 / 82.1; Joint encoding: 75.3 / 76.6 / 76.0 / 82.1; Char embedding: 74.2 / 72.3 / 73.2 / 80.2; Three-way AP: 76.2 / 77.3 / 76.8 / 82.7.]", "These baseline systems (while performing well on valid questions) perform poorly when evaluated using Macro F1, which measures performance on both valid and invalid follow-up questions.", "Macro F1 is the overall evaluation metric used to compare all systems.", "Overall, identifying follow-up questions from the same conversation (Test-III) is harder compared to other conversations (Test-II).", "We perform an ablation study as shown in Table 4.", "
The proposed model performs worst when we do not consider the conversation history.", "This is because the question-answer pairs in the conversation history help to determine topic continuity while identifying a valid follow-up question.", "The performance also drops when we do not consider the associated passage (i.e., knowledge), because it helps to capture topic shift.", "The performance also degrades when we remove A^{c,q}.", "It performs better than the model where we do not consider the conversation history at all, as the conversation history is taken into consideration in passage encoding.", "The performance also degrades when we remove other components such as multi-factor attentive encoding, joint encoding, and character embedding.", "The proposed model aims to capture topic continuity and topic shift by using a three-way attentive pooling network.", "Attention pooling on A^{q,p} and A^{c,p} aims to capture topic shift in the follow-up question for a given conversation history.", "Consider the first example in Table 5.", "When we do not consider the passage, the model could not identify the follow-up question correctly, while our proposed model correctly identifies the topic shift to the duration of the riot by validating against the passage words after four days and restore order and take back the prison on September 13 .", "In the second example, while our model could correctly identify topic continuity through Schuur , the model without history fails to identify the follow-up question.", "We performed an error analysis on cases where our proposed model failed to identify the follow-up questions.", "We randomly sampled 50 such instances (25 valid and 25 invalid ) from the development set.", "We found that 32% of them require pronoun resolution for the subject in the follow-up questions.", "38% of the instances require validation of the actions/characteristics of the subjects (e.g., did they have any children? vs. gave birth to her daughter ).", "14% of the errors occur when matching objects or predicates which occur in different forms is required (e.g., hatred vs. hate , television vs. TV ).", "For the remaining 16% of the cases, the model could not correctly capture the topic shift.", "Many data-driven machine learning methods have been shown to be effective for tasks relevant to dialog, such as dialog policy learning (Young et al., 2013), dialog state tracking (Henderson et al., 2013; Williams et al., 2013; Kim et al., 2016), and natural language generation (Sordoni et al., 2015; Li et al., 2016; Bordes et al., 2017).", "Most of the recent dialog systems are either not goal oriented (e.g., simple chit-chat bots), or domain-specific if they are goal oriented (e.g., IT help desk).", "In the last few years, there has been a surge of interest in conversational question answering.", "Saha et al. (2018) released a Complex Sequential Question Answering (CSQA) dataset for learning conversations through a series of interrelated QA pairs by inferencing over a knowledge graph.", "Choi et al. (2018) released a large-scale conversational QA dataset, namely question answering in context (QuAC), which mimics a student-teacher interactive scenario.", "Reddy et al. (2019) released the CoQA dataset and many systems were evaluated on it.", "Zhu et al. (2018) proposed SDNet to fuse context into traditional reading comprehension models.", "Huang et al. (2019) proposed a Flow mechanism that can incorporate intermediate representations generated during the process of answering previous questions, through an alternating parallel processing structure.", "In a conversation setting, given the previous QA pairs as conversation history, while these models focus on answering the next question, our work is focused on identifying follow-up questions.", "Recently, Saeidi et al. (2018) proposed a dataset for regulatory texts that requires a model to ask follow-up clarification questions.", "However, the answers are limited to yes or no , which makes the task rather restrictive.", "[Table 5 example passage: On September 9, 1971, prisoners at the state penitentiary at Attica, NY, took control of a cell block and seized thirty-nine correctional officers as hostages.]", "Moreover, while Saeidi et al. (2018) focuses on generating a clarification question in response to a question of a conversation, we focus on identifying whether a question is a follow-up question of a conversation.", "In this paper, we present a new follow-up question identification task in a conversational setting.", "We developed a dataset, namely LIF, which is derived from the previously released QuAC dataset.", "Notably, the proposed dataset supports automatic evaluation.", "We proposed a novel three-way attentive pooling network which identifies whether a follow-up question is valid or invalid by considering the associated knowledge in a passage and the conversation history.", "Additionally, we developed several strong baseline systems, and showed that our proposed three-way attentive pooling network outperforms all the baseline systems.", "Incorporating our three-way attentive pooling network into open-domain conversational QA systems will be interesting future work.", "This research is supported by the National Research Foundation Singapore under its AI Singapore Programme (Award Number: AISG-RP-2018-007)." ]
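The scoring stage of the three-way attentive pooling network described in the sentences above (Eqs. 2-6) can be sketched in a few lines of numpy. This is an illustrative reconstruction, not the authors' released implementation: it assumes pre-computed contextual encodings P, Q, and C, the function names are ours, and softmax normalization is applied to all four pooled vectors for symmetry (the paper states it explicitly only for r^{qp} and r^{cp}).

```python
import numpy as np

def softmax(v):
    # Numerically stable softmax over a 1-D array.
    e = np.exp(v - v.max())
    return e / e.sum()

def three_way_ap_score(P, Q, C):
    """Score one candidate follow-up question.

    P: passage encoding, shape (T, H)
    Q: conversation-history encoding, shape (U, H)
    C: candidate-question encoding, shape (V, H)
    """
    A_qp = P @ Q.T  # Eq. 2: passage vs. history similarity, (T, U)
    A_cp = P @ C.T  # Eq. 3: passage vs. candidate similarity, (T, V)
    A_cq = Q @ C.T  # Eq. 4: history vs. candidate similarity, (U, V)

    # Column-wise max-pooling + softmax (Eq. 5): passage-conditioned weights.
    r_qp = softmax(A_qp.max(axis=0))  # (U,)
    r_cp = softmax(A_cp.max(axis=0))  # (V,)
    # Row- and column-wise max-pooling over A_cq: passage-independent weights.
    r_qc = softmax(A_cq.max(axis=1))  # (U,)
    r_cq = softmax(A_cq.max(axis=0))  # (V,)

    # Eq. 6: s1 compares history and candidate through the passage,
    # s2 compares them directly.
    s1 = (r_qp @ Q) @ (r_cp @ C)
    s2 = (r_qc @ Q) @ (r_cq @ C)
    return float(s1 + s2)
```

At prediction time this score would be compared against a threshold tuned on the development set to decide whether the candidate is a valid or invalid follow-up question.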
[ "abstain", "abstain", "objective", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", 
"abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "method", "objective", "objective", "abstain", "objective", "objective", "abstain", "other" ]
[ "Most NLP models today treat language as universal, even though socio- and psycholinguistic research shows that the communicated message is influenced by the characteristics of the speaker as well as the target audience.", "This paper surveys the landscape of personalization in natural language processing and related fields, and offers a path forward to mitigate the decades of deviation of the NLP tools from sociolinguistic findings, allowing to flexibly process the natural language of each user rather than enforcing a uniform NLP treatment.", "It outlines a possible direction to incorporate these aspects into neural NLP models by means of socially contextual personalization, and proposes to shift the focus of our evaluation strategies accordingly.", "Our language is influenced by one's individual characteristics as well as by the affinity to various sociodemographic groups (Bucholtz and Hall, 2005; McPherson et al., 2001; Eckert and McConnell-Ginet, 2013).", "Yet the majority of NLP models today treats language as universal, acknowledging that words have different meanings in different semantic contexts, but typically assuming that this context has the same meaning for everyone.", "In this paper, I propose that our focus shift towards interpreting the language together with its user-dependent, contextual personal and social aspects, in order to truly process the natural language of a user.", "I outline a possible direction to incorporate these aspects into neural NLP models, and suggest to adjust our evaluation strategies.", "The paper is structured with the following aims in mind: Sec. 2 provides historical context, seeking evidence on personalization needs.", "Sec. 3 reviews existing personalization work, as the personalization efforts and success stories are scattered across contributions to various applied tasks.", "Sec. 4 contemplates on how NLP personalization could be adopted as a process of several stages.", "Sec. 5 outlines an implementation proposal on contextually personalized classification models, building upon flexible, socially conditioned user representations.", "Sec. 6 proposes novel evaluation approaches reflecting the benefit of personalized models.", "Finally, Sec. 7 opens the discussion on ethical aspects, non-personalizable NLP tasks, and the role of industry in personal data collection and protection.", "Since the 1990s, with the rise of the so-called empirical or statistical NLP area (Manning et al., 1999; Brill and Mooney, 1997), the focus on frequently appearing phenomena in large textual data sets unavoidably led to NLP tools supporting standard English for the generic needs of an anonymous user.", "An NLP tool, whether e.g. a POS tagger, dependency parser, machine translation model, or a topic classifier, was typically provided as one trained model for one language (Toutanova et al., 2003; Klein and Manning, 2003; Morton et al., 2005), or, later on, for major underperforming domains, such as Twitter (Gimpel et al., 2011).", "However, enforcing artificial domain boundaries is suboptimal (Eisenstein, 2013).", "Neglecting the variety of users and use cases doesn't make the tools universally applicable with the same performance; it only makes our community blind to the built-in bias towards the specifics of user profiles in training data (Hovy, 2015; Tatman, 2017).", "Meanwhile, in the information retrieval area, personalization has been incorporated from the early days: it is a long-accepted paradigm that different users with different information needs might search for that need using the same query (Verhoeff et al., 1961) and that individual information needs evolve (Taylor, 1968).", "With the rising popularity of search engines in the 1990s, the need for personalization in the interpretation of the query became obvious (Wilson, 1999).", "Exploiting logs of user search interactions allowed personalization at scale (Carbonell and Goldstein, 1998; Sanderson and Croft, 2012).", 
"In the 2000s, it became acceptable to personalize search results using implicit information about a user's interests and activities, e.g. leveraging browsing history or even e-mail conversations (Teevan et al., 2005; Dou et al., 2007; Matthijs and Radlinski, 2011).", "Today, hardly any of us can imagine that searching e.g. for a pizzeria from our cell phone would return the same list of results for everyone, regardless of our location.", "The area of recommendation systems has followed the IR trends, with more emphasis on the social than the personal component.", "Already the early GroupLens Usenet experiments (Miller et al., 1997; Resnick et al., 1994) showed the effectiveness of personalized article recommendations via collaborative filtering.", "Acknowledging the potential of personalizing via similar or related users, the focus moved towards exploiting information from users' social networks (Guy et al., 2010; De Francisci Morales et al., 2012; Guy et al., 2009).", "Similar developments are emerging for example in the area of personalized language models (Ji et al., 2019; Wen et al., 2012; Yoon et al., 2017; McMahan et al., 2017), which are largely used e.g. in predictive writing, and in natural language generation (Oraby et al., 2018; Harrison et al., 2019), aiming e.g. 
at selecting and preserving a consistent personality and style within a discourse.", "Drawing inspiration from these areas, I argue it is natural for users to expect personalized approaches when an NLP system attempts to interpret their language, i.e., attempts to assign any label to a provided text segment, whether it is, e.g., a sentiment of their sentence, a part-of-speech of a word they used, a sense definition from a knowledge base, or even a translation.", "As I discuss in the following section, even basic personal information has been shown to be relevant for system accuracy.", "Inferring user traits We adjust our language with respect to the sociodemographic group we feel related to (McPherson et al., 2001; Bucholtz and Hall, 2005; Holmes and Meyerhoff, 2008; Eckert, 2012).", "This language adjustment can be, in turn, used in NLP algorithms to infer a range of individual user traits.", "Experiments have been conducted with estimating variables such as age (Rao et al., 2010; Nguyen et al., 2011), gender (Burger et al., 2011; Bamman et al., 2014; Sap et al., 2014), geolocation (Eisenstein et al., 2010), political preferences (Volkova et al., 2014), socio-economic status (Preotiuc-Pietro et al., 2015), impact (Lampos et al., 2014), and a range of psychological traits and issues (Schwartz et al., 2013; Park et al., 2015; Sumner et al., 2012; Guntuku et al., 2017; Coppersmith et al., 2014).", "While most of the above-listed experiments have been conducted on Twitter, a variety of other datasets have been used, including phone conversations (Mairesse et al., 2007; Ivanov et al., 2011), blogs (Mukherjee and Liu, 2010; Schler et al., 2006), Facebook (Markovikj et al., 2013), or YouTube (Filippova, 2012).", "Human judges perform surprisingly poorly on user profiling tasks, grounding their judgement in topical stereotypes (Carpenter et al., 2017).", "However, albeit more accurate thanks to capturing stylistic variation elements, statistical models 
are prone to stereotype propagation as well (Costa-jussà et al., 2019; Koolen and van Cranenburgh, 2017).", "While many experiments have been conducted using discrete variables for demographics and personality, real-valued continuous representations are preferable (Lynn et al., 2017).", "Numerous researchers have been pointing out that it would be more meaningful to create models building on recent developments in sociolinguistics, i.e. treating demographic variables as fluid and social, e.g. modeling what influences speakers to show more or less of their identity through language, or jointly modeling variation between and within speakers (Eckert and McConnell-Ginet, 2013; Nguyen et al., 2014; Bamman et al., 2014; Eisenstein, 2013).", "Improving NLP tasks with user traits Actively accounting for sociodemographic factors in text classification models leads to improved performance across NLP applications.", "So far, such studies have been conducted most prominently for the English language, using age and gender variables, with the most focus on sentiment analysis tasks (Volkova et al., 2013; Hovy, 2015; Lynn et al., 2017; Yang and Eisenstein, 2017).", "Other explored tasks include topic detection, part-of-speech tagging (Hovy, 2015), prepositional phrase attachment, sarcasm detection (Lynn et al., 2017), fake news detection (Long et al., 2017; Potthast et al., 2018), or detection of mental health issues (Benton et al., 2016).", "Apart from demographic variables, personality traits play a role as well, e.g. in stance detection (Lynn et al., 2017), sarcasm detection, opinion change prediction (Lukin et al., 2017), prediction of regional life satisfaction or mortality rate (Zamani et al., 2018).", "NLP models can also improve by exploiting a user's past context and prior beliefs, e.g. 
for sarcasm (Bamman and Smith, 2015), stance prediction (Sasaki et al., 2018), persuasion (Durmus and Cardie, 2018) or conversation re-entry (Zeng et al., 2019).", "Methods used to incorporate the social and psychological variables into models are discussed in Sec. 5.", "Improving NLP tasks with social graphs An emerging line of research makes use of social interactions to derive information about the user, representing each user as a node in a social graph and creating low-dimensional user embeddings induced by neural architectures (Grover and Leskovec, 2016; Qiu et al., 2018).", "Including network information improves performance on profiling tasks such as predicting user gender (Farnadi et al., 2018) or occupation (Pan et al., 2019), as well as on detecting online behavior such as cyberbullying (Mathur et al., 2018), abusive language use (Qian et al., 2018; Mishra et al., 2018) or suicide ideation (Mishra et al., 2019).", "From the user experience perspective, personalization of NLP tools could be divided into three steps.", "Explicit input.", "In the first step, the user is allowed to provide personal information for the NLP components explicitly.", "The depth of information provided can vary from specifying one's own age to taking personality questionnaires.", "This user behavior is somewhat similar to subscribing to topics of interest for personalized newsletters; the user has full control over the level of customization.", "However, increasing the burden on the user can yield results inferior to implicit inference (Teevan et al., 2005).", "Implicit inference.", "More conveniently, personal information about the user can be inferred implicitly by the system, as demonstrated e.g. 
by the models discussed in Sec. 3.", "The result of such inference can be either a set of explicit labels or a latent user representation capturing similar information in a larger number of data-driven dimensions.", "For the user, such personalization might currently feel intrusive in the context of an NLP system; however, in many related research areas user expectations have already shifted (cf. Sec. 2).", "Contextualized implicit inference.", "In the third step, personalization also includes intra-user modeling of different individual contexts based on the user's communication goals.", "This reflects the social science argument that an identity is the product rather than the source of linguistic and other semiotic practices, and identities are relationally constructed through several, often overlapping, aspects of the relationship between self and other, including similarity/difference, genuineness/artifice and authority/delegitimacy (Bucholtz and Hall, 2005).", "This approach is also aligned with NLP findings on social power in dialogue (Bracewell et al., 2012; Bramsen et al., 2011; Prabhakaran et al., 2012).", "Such a solution can be perceived as less invasive by users, as the contextual adaptation may diminish the otherwise built-in stereotypes of language use (e.g. some users may prefer to use more emotionally charged words in private social contexts, but not necessarily in professional conversations).", "Early experiments used basic demographic variables directly as input features in the model (Volkova et al., 2013).", "Hovy (2015) uses age and gender as modifying factors for the input word embeddings.", "In a similar manner, Lynn et al. (2017) use a multiplicative compositional function to combine continuous user trait scores, inferred via factor analysis, with original feature values, augmenting the feature set so that each feature exists with and without the trait information integrated.", "Benton et al. 
(2017) use age and gender as auxiliary tasks in a multitask learning setup for psychological labeling of users.", "Zamani and Schwartz (2017) apply a residualized control approach for their task, training a language model over the prediction errors of the model trained on sociodemographic variables only.", "Later they combine it with the factor analysis approach (Zamani et al., 2018).", "Benton et al. (2016) learn user representations by encoding the user's social network as a vector, where users with similar social networks have similar vector representations.", "A commonly used technique is to define the context for each node, for example by random walks, and train a predictive model to perform context prediction. Similar network-based learning is employed in node2vec (Grover and Leskovec, 2016).", "Yang and Eisenstein (2017) propose to use neural attention mechanisms in a social graph over followers, mentions and retweets, to leverage linguistic homophily.", "However, the user modeling approaches discussed so far focus on finding one representation for one user.", "A modern, personalized NLP system shall be able to capture not only the inherent semantic aspects of the analyzed discourse together with the latent vectorial representations of user characteristics, but also contextual user profiles based on an identity sought in their current social microenvironment.", "A strengthened industry-academia cooperation is crucial in such data collection (more on this in Sec. 
7).", "Assuming access to a larger online history of each user, we could draw a parallel to the design of contextual word embeddings (Peters et al., 2018; Howard and Ruder, 2018; Devlin et al., 2019), which train neural networks as language models, then use the context vectors provided for each word token as pretrained word vectors.", "With an increasing number of online corpora containing user metadata, we can use recurrent or attentive neural networks to create large-scale social representations of users in a similar manner, allowing multiple pretrained senses of each user identity: vector representations of user conversational styles, opinions, interests, etc., treating those representations as dynamically changing in different social contexts.", "These representations can then be matched to new users based on the sparse linguistic, sociodemographic, psychological, and network information available, and fine-tuned on the context of a given task in a given social microenvironment, e.g. based on the stable part of the personal vectorial representation of the other users present in the conversation.", "Currently, most of the NLP ground truth exists in a vacuum, for everyone.", "Our systems typically use labels obtained as an average or majority vote provided by a number of anonymous annotators, even for tasks where they highly disagree (Waseem, 2016; Stab and Gurevych, 2014).", "As pointed out in Bender and Friedman (2018), we rarely get to know anything about the people other than whether they were experts (footnote 1: read, undergrad students vs. lab colleagues).", "If we truly aim at personalizing NLP systems, the first step is understanding who the recipients of our system decisions are.", "In contrast to IR, where the user of the interpreted result is normally the author of the query, in NLP the use cases vary.", "For example, rather than merely labeling a piece of text as sarcasm, we shall ask (A) Did the author mean this statement as sarcasm?", "(B) Was this understood by others as sarcasm?", "What kinds of users interpret this statement as sarcasm?", "In tasks of type A, it is sensible to ask the authors themselves about the intended label (e.g. Are we correct this was a joke / positive review / supportive argument?).", "We shall further assess the value of the system personalization.", "E.g. a user may prefer a model that correctly interprets her sarcasm even when most annotators typically don't recognize it.", "We can take inspiration from subjective measures used in evaluating spoken dialogue systems, such as A/B testing (Kohavi et al., 2014), customer satisfaction (Kelly et al., 2009; Kiseleva et al., 2016) or interestingness (Harrison et al., 2019; Oraby et al., 2018).", "Yet most of the tasks are of type B, where we implicitly try to label how a piece of text is perceived by others (e.g. hate speech, assertiveness, persuasiveness, hyperpartisan argumentation).", "Given that these others vary in their judgments (Kenny and Albright, 1987) and this variation is informative for NLP models (Plank et al., 2014; Chklovski and Mihalcea, 2003), I suggest we start caring in NLP explicitly about who these others are, and evaluate our models with respect to labels assigned by defined target groups of users (e.g. with regard to sociodemographics, personality, expertise in the task) rather than one objective truth.", "Initial exploration of this area has been started e.g. 
for perceived demographics (Volkova and Bachrach, 2016; Carpenter et al., 2017) and natural language inference (Pavlick and Kwiatkowski, 2019).", "The ability to automatically approximate personal characteristics of online users in order to improve language understanding algorithms requires us to consider a range of ethical concerns.", "Unfair use prevention It is almost impossible to prevent abuse of once-released technology even when developed with good intentions (Jonas, 1983).", "Hence it may be more constructive to strive for an informed public, addressing the dual-use danger with a preemptive disclosure (Rogaway, 2015; Hovy and Spruit, 2016): letting potential abusers know that certain illegal and unethical purposes of using personalized models are not supported, and letting potential users know about the risk.", "For example, the European Ethics Guidelines for Trustworthy AI foresee that 'Digital records of human behaviour may allow AI systems to infer not only individuals' preferences, but also their sexual orientation, age, gender, religious or political views' and claim that it must be ensured that data collected about them 'will not be used to unlawfully or unfairly discriminate against them'.", "Incorrect and stereotypical profiling Sociodemographic classification efforts risk invoking stereotyping and essentialism.", "Such stereotypes can cause harm even if they accurately reflect average differences (Rudman and Glick, 2012).", "These harms can be amplified by the semblance of objectivity created by the use of a computer algorithm (Koolen and van Cranenburgh, 2017).", "It is important we control for variables in the corpus as well as for our own interpretation biases.", "Privacy protection Use of any data for personalization shall be transparent.", "Even public social media data shall be used with consent and in an aggregated manner; no individual posts shall be republished (Hewson and Buchanan, 2013).", "Regarding explicit consent, research shall take account of users' expectations (Williams et al., 2017; Shilton and Sayles, 2016; Townsend and Wallace, 2016).", "A similar issue is discussed by Smiley et al. (2017) regarding NLG ethics, as NLG systems can incorporate the background and context of a user to increase the communication effectiveness of the text, but as a result may be missing alternative views.", "They suggest addressing this limitation by making users aware of the use of personalization, similar to addressing provenance.", "Role of industry and academia in user data collection Privacy and controllability are auxiliary tasks to personalization and adaptation (Torre, 2009).", "Strictly protecting user privacy when collecting user data for model personalization is of utmost importance for preserving user trust, which is why, perhaps counter-intuitively, I encourage stronger industry-academia collaborations to facilitate a less intrusive data treatment.", "An inspiration can be taken from the concept of differential privacy (Dwork, 2008), applied e.g. 
in differentially private language models (McMahan et al., 2017), which allow customizing for the user without incorporating her private vocabulary information into the public cloud model.", "Similarly, doing academic research on personalized NLP classification tasks directly within industry applications such as mobile apps, with explicit user consent, would enable transparent experiments at scale, being potentially more secure than gathering and manipulating one-time academic data collections offline.", "It may also contribute to better generalizability of the conclusions than strictly academic case studies that are typically limited in scale.", "Personalization as a harmful ambiguity layer Given the field's bias towards reporting personalization results only when successful, no unpersonalizable tasks have been defined so far.", "With that, one question remains open: can we benefit from personalization everywhere across NLP, or are there cases where subjective treatment of a language is not desired, or even harmful?", "E.g., a legal text shall remain unambiguous to interpretation.", "On the other hand, the ability to understand it is subjective, and some users may appreciate lexical simplification (Xu et al., 2015).", "Are there objective NLP tasks as such, or can we segment all of those into an objective and a subjective part of the application?", "Building upon Eisenstein (2013), Lynn et al. 
(2017), and Hovy (2018), I argue that, following the historical development in areas related to NLP, users are also ready for the personalization of text classification models, enabling more flexible adaptation to truly processing their natural language rather than enforcing a uniform NLP treatment for everyone.", "Reflecting the current possibilities with available web and mobile data, I propose to expand the existing user modeling approaches in deep learning models with contextual personalization, mirroring different facets of one user in dynamic, socially conditioned vector representations.", "Modeling demographic and personal variables as dynamic and social will allow us to reflect the variety of ways individuals construct their identity through language, and to conduct novel sociolinguistic experiments to better understand the development of online communities.", "I also suggest shifting the focus of our evaluation strategies towards the individual aims and characteristics of the end users of our labeling models, rather than aggregating all variations into objective truths, which will allow us to pay more attention to the social biases present in our models." ]
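The random-walk user embeddings discussed above (node2vec-style context prediction over a social graph) can be sketched as follows. This is a minimal toy illustration, not the setup of any cited paper: the graph, the user names, and the SVD factorization (a cheap stand-in for skip-gram training) are all assumptions made for the example.

```python
import random
import numpy as np

def random_walks(adj, num_walks=10, walk_len=8, seed=0):
    """Generate truncated random walks; each walk is a 'sentence' of user ids."""
    rng = random.Random(seed)
    walks = []
    for _ in range(num_walks):
        for start in sorted(adj):
            walk = [start]
            while len(walk) < walk_len:
                nbrs = sorted(adj[walk[-1]])
                if not nbrs:
                    break
                walk.append(rng.choice(nbrs))
            walks.append(walk)
    return walks

def embed_users(walks, nodes, window=1, dim=4):
    """Co-occurrence counts over walk windows, factorized with a truncated SVD.
    A cheap stand-in for the skip-gram training used by node2vec."""
    idx = {u: i for i, u in enumerate(nodes)}
    counts = np.zeros((len(nodes), len(nodes)))
    for walk in walks:
        for i, u in enumerate(walk):
            for j in range(max(0, i - window), min(len(walk), i + window + 1)):
                if i != j:
                    counts[idx[u], idx[walk[j]]] += 1
    # log(1 + count) smoothing, then keep the top `dim` singular directions
    u_mat, s, _ = np.linalg.svd(np.log1p(counts))
    return {u: u_mat[idx[u], :dim] * s[:dim] for u in nodes}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy, symmetric "social graph": two friend circles bridged by cat-dan.
adj = {
    "ann": {"bob", "cat"}, "bob": {"ann", "cat"}, "cat": {"ann", "bob", "dan"},
    "dan": {"cat", "eve", "fay"}, "eve": {"dan", "fay"}, "fay": {"dan", "eve"},
}
nodes = sorted(adj)
embs = embed_users(random_walks(adj), nodes)
```

Users from the same friend circle end up closer in the embedding space, which is the homophily signal the profiling work above exploits; a real system would replace the SVD step with trained skip-gram embeddings and far larger graphs.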
[ "abstain", "method", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective" ]
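The per-group evaluation proposed above (scoring models against labels assigned by defined target groups of users rather than one aggregated truth) could look like the following sketch. All data, group names, and field layouts here are hypothetical, introduced only for illustration.

```python
from collections import Counter

# Hypothetical per-annotator labels for two text items.
annotations = {
    "item1": {"a1": "sarcasm", "a2": "sarcasm", "a3": "literal", "a4": "literal", "a5": "literal"},
    "item2": {"a1": "sarcasm", "a2": "sarcasm", "a3": "sarcasm", "a4": "literal", "a5": "literal"},
}
# Hypothetical annotator profiles (sociodemographic group membership).
profiles = {"a1": "under30", "a2": "under30", "a3": "over30", "a4": "over30", "a5": "over30"}

def group_labels(item_annotations, profiles):
    """Majority label per annotator group instead of one global majority vote."""
    by_group = {}
    for annotator, label in item_annotations.items():
        by_group.setdefault(profiles[annotator], []).append(label)
    return {g: Counter(labels).most_common(1)[0][0] for g, labels in by_group.items()}

def per_group_accuracy(predictions, annotations, profiles):
    """Accuracy of model predictions measured separately against each group's labels."""
    hits, totals = Counter(), Counter()
    for item, pred in predictions.items():
        for group, label in group_labels(annotations[item], profiles).items():
            totals[group] += 1
            hits[group] += int(pred == label)
    return {g: hits[g] / totals[g] for g in totals}

predictions = {"item1": "sarcasm", "item2": "sarcasm"}
print(per_group_accuracy(predictions, annotations, profiles))
```

On this toy data the same model scores 1.0 against the under-30 group's labels and 0.0 against the over-30 group's, exactly the kind of disagreement a single majority-vote gold standard hides. Note that `group_labels` breaks ties by insertion order (via `Counter.most_common`); a real implementation would need an explicit tie-breaking policy.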
[ "This paper studies joint models for selecting correct answer sentences among the top k provided by answer sentence selection (AS2) modules, which are core components of retrieval-based Question Answering (QA) systems.", "Our work shows that a critical step to effectively exploiting an answer set regards modeling the interrelated information between pairs of answers.", "For this purpose, we build a three-way multiclassifier, which decides if an answer supports, refutes, or is neutral with respect to another one.", "More specifically, our neural architecture integrates a state-of-the-art AS2 module with the multi-classifier, and a joint layer connecting all components.", "We tested our models on WikiQA, TREC-QA, and a real-world dataset.", "The results show that our models obtain the new state of the art in AS2.", "Automated Question Answering (QA) research has received renewed attention thanks to the diffusion of Virtual Assistants.", "Among the different types of methods to implement QA systems, we focus on Answer Sentence Selection (AS2) research, which originated from the TREC-QA track (Voorhees and Tice, 1999), as it proposes models that are more suitable for a production setting, e.g., more efficient than those developed in machine reading (MR) work (Chen et al., 2017).", "Garg et al. 
(2020) proposed the TANDA approach based on pre-trained Transformer models, obtaining impressive improvements over the state of the art for AS2, measured on the two most used datasets, WikiQA (Yang et al., 2015) and TREC-QA (Wang et al., 2007).", "However, TANDA was applied only to pointwise rerankers (PR), e.g., simple binary classifiers.", "Bonadiman and Moschitti (2020) tried to improve this model by jointly modeling all answer candidates with listwise methods, e.g., (Bian et al., 2017). (Footnote: Work done while the author was an intern at Amazon Alexa.)", "Table 1. Claim: Joe Walsh was inducted in 2001.", "Ev 1: As a member of the Eagles, Walsh was inducted into the Rock and Roll Hall of Fame in 1998, and into the Vocal Group Hall of Fame in 2001.", "Ev 2: Joseph Fidler Walsh (born November 20, 1947) is an American singer-songwriter, composer, multi-instrumentalist and record producer.", "Ev 3: Walsh was awarded with the Vocal Group Hall of Fame in 2001.", "Unfortunately, merging the embeddings from all candidates with standard approaches, e.g., CNN or LSTM, did not improve over TANDA.", "A more structured approach to building joint models over sentences can instead be observed in Fact Verification Systems, e.g., the methods developed in the FEVER challenge (Thorne et al., 2018a).", "Such systems take a claim, e.g., Joe Walsh was inducted in 2001, as input (see Tab. 
1), and verify if it is valid, using related sentences called evidences (typically retrieved by a search engine).", "For example, Ev 1, As a member of the Eagles, Walsh was inducted into the Rock and Roll Hall of Fame in 1998, and into the Vocal Group Hall of Fame in 2001, and Ev 3, Walsh was awarded with the Vocal Group Hall of Fame in 2001, support the veracity of the claim.", "In contrast, Ev 2 is neutral as it describes who Joe Walsh is but does not contribute to establishing the induction.", "We conjecture that supporting evidence for answer correctness in the AS2 task can be modeled with a similar rationale.", "In this paper, we design joint models for AS2 based on the assumption that, given q and a target answer candidate t, the other answer candidates, (c_1, ..., c_k), can provide positive, negative, or neutral support to decide the correctness of t.", "Our first approach exploits Fact Checking research: we adapted a state-of-the-art FEVER system, KGAT (Liu et al., 2020), for AS2.", "We defined a claim as a pair constituted of the question and one target answer, while considering all the other answers as evidences.", "We re-trained and rebuilt all its embeddings for the AS2 task.", "Our second method, Answer Support-based Reranker (ASR), is completely new; it is based on the representation of the pair (q, t), generated by state-of-the-art AS2 models, concatenated with the representation of all the pairs (t, c_i).", "The latter summarizes the contribution of each c_i to t using a max-pooling operation.", "c_i can be unrelated to (q, t) since the candidates are automatically retrieved, and thus may introduce just noise.", "To mitigate this problem, we use an Answer Support Classifier (ASC) to learn the relatedness between t and c_i by classifying their embedding, which we obtain by applying a transformer network to their concatenated text.", "ASC tunes the (t, c_i) embedding parameters according to the evidence that c_i provides to t.", "Our Answer 
Support-based Reranker (ASR) significantly improves the state of the art, and is also simpler than our approach based on KGAT.", "Our third method is an extension of ASR.", "It should be noted that, although ASR exploits the information from the k candidates, it still produces a score for a target t without knowing the scores produced for the other target answers.", "Thus, we jointly model the representation obtained for each target in a multi-ASR (MASR) architecture, which can then carry out complete global reasoning over all target answers.", "We experimented with our models over three datasets, WikiQA, TREC-QA and WQA, where the latter is an internal dataset built on anonymized customer questions.", "The results show that ASR improves the best current model for AS2, i.e., TANDA, by 3%, corresponding to an error reduction of 10% in Accuracy, on both WikiQA and TREC-QA.", "We also obtain a relative improvement of 3% over TANDA on WQA, confirming that ASR is a general solution to design accurate QA systems.", "Most interestingly, MASR improves ASR by an additional 2%, confirming the benefit of joint modeling.", "Finally, it is interesting to mention that the MASR improvement is also due to the use of FEVER data for pre-fine-tuning ASC, suggesting that fact verification inference and answer support inference are similar.", "We consider retrieval-based QA systems, which are mainly constituted by", "(i) a search engine, retrieving documents related to the questions; and", "(ii) an AS2 model, which reranks passages/sentences extracted from the documents.", "The top sentence is typically used as the final answer for the users.", "The task of reranking answer-sentence candidates provided by a retrieval engine can be modeled with a classifier scoring the candidates.", "Let q be an element of the question set, Q, and A = {c_1, ..., c_n} be a set of candidates for q; a reranker can then be defined as R : Q × Π(A) → Π(A), where Π(A) is the set of all permutations of A.", "Previous work targeting ranking problems in the text domain has classified reranking functions into three buckets: pointwise, pairwise, and listwise methods.", "Pointwise reranking: This approach learns p(q, c_i), which is the probability of c_i correctly answering q, using a standard binary classification setting.", "The final rank is simply obtained by sorting the c_i based on p(q, c_i).", "Previous work estimates p(q, c_i) with neural models (Severyn and Moschitti, 2015), also using attention mechanisms, e.g., Compare-Aggregate (Yoon et al., 2019), inter-weighted alignment networks (Shen et al., 2017), and pre-trained Transformer models, which are the state of the art.", "Garg et al. (2020) proposed TANDA, which is the current most accurate model on WikiQA and TREC-QA.", "Pairwise reranking: The method considers binary classifiers of the form f(q, c_i, c_j) for determining the partial rank between c_i and c_j; the scoring function p(q, c_i) is then obtained by summing up all the contributions with respect to the target candidate t = c_i, e.g., p(q, c_i) = Σ_j f(q, c_i, c_j).", "There has been a large body of work preceding Transformer models, e.g., (Laskar et al., 2020; Tayyar Madabushi et al., 2018; Rao et al., 2016).", "However, these methods are largely outperformed by the pointwise TANDA model.", "Listwise reranking: This approach, e.g., (Bian et al., 2017; Cao et al., 2007; Ai et al., 2018), aims at learning p(q, π), with π ∈ Π(A), using the information on the entire set of candidates.", "The loss function for training such networks is constituted by the contribution of all elements of its ranked items.", "The closest work to our research is by Bonadiman and Moschitti (2020), who designed several joint models.", "These improved early neural networks based on CNN and LSTM for AS2, but failed to 
improve the state of the art using pre-trained Transformer models.", "MR is a popular QA task that identifies an answer string in a paragraph or a text of limited size for a question.", "Its application to the retrieval scenario has also been studied (Chen et al., 2017; Hu et al., 2019; Kratzwald and Feuerriegel, 2018).", "However, the large volume of retrieved content makes such use impractical for now.", "Moreover, the joint modeling aspect of MR regards sentences from the same paragraphs.", "Jin et al. (2020) use the relation between candidates in a multi-task learning approach for AS2.", "However, they do not exploit transformer models, thus their results are rather below the state of the art.", "In contrast with the work above, our modeling is driven by an answer support strategy, where the pieces of information are taken from different documents.", "This makes our model even more unique; it allows us to design innovative joint models, which have not yet been designed in any MR system.", "Fact verification has become a social need given the massive amount of information generated daily.", "The problem is, therefore, becoming increasingly important in the NLP context (Mihaylova et al., 2018).", "In QA, answer verification is directly relevant due to its nature of content delivery (Mihaylova et al., 2019).", "The problem has been explored in the MR setting (Wang et al., 2018).", "Zhang et al. 
(2020a) also proposed fact checking for product questions using additional associated evidence sentences.", "The latter are retrieved based on similarity scores computed with both TF-IDF and sentence embeddings from pre-trained BERT models.", "While the process is technically sound, the retrieval of evidence is an expensive process, which is prohibitive to scale in production.", "We instead address this problem by leveraging the top answer candidates.", "One simple and effective method to build an answer selector is to use a pre-trained Transformer model, adding a simple classification layer to it, and fine-tuning the model on the AS2 task.", "Specifically, q = Tok^q_1, ..., Tok^q_N and c = Tok^c_1, ..., Tok^c_M are encoded in the input of the Transformer by delimiting them using three tags: [CLS], [SEP] and [EOS], inserted at the beginning, as separator, and at the end, respectively.", "This input is encoded as three embeddings based on tokens, segments and their positions, which are fed as input to several layers (up to 24).", "Each of them contains sublayers for multi-head attention, normalization and feed-forward processing.", "The result of this transformation is an embedding, E, representing (q, c), which models the dependencies between words and segments of the two sentences.", "For the downstream task, E is fed (after applying a non-linearity function) to a fully connected layer with weights W and bias B.", "The output layer can be used to implement the task function.", "For example, a softmax can be used to model the probability of the question/candidate pair classification, as: p(q, c) = softmax(W tanh(E(q, c)) + B).", "We can train this model with the cross-entropy loss L = −Σ_{l∈{0,1}} y_l log(ŷ_l) on pairs of texts, where y_l is the correct/incorrect answer label, ŷ_1 = p(q, c), and ŷ_0 = 1 − p(q, c).", "Training the Transformer from scratch requires a large amount of labeled data, but it can be 
pre-trained using the masked language model and next sentence prediction tasks, for which labels can be automatically generated.", "Several methods for pretraining Transformer-based language models have been proposed, e.g., BERT (Devlin et al., 2018), RoBERTa (Liu et al., 2019), XLNet (Yang et al., 2019), ALBERT (Lan et al., 2020).", "To better show the potential of our approach and the complexity of the task, we designed three joint model baselines based on:", "(i) a multiclassifier approach (a listwise method),", "(ii) a pairwise joint model operating over k + 1 candidates, and (iii) our adaptation of the KGAT model (a pairwise method).", "The first baseline is also a Transformer-based architecture: we concatenate the question with the top k + 1 answer candidates,", "i.e., (q [SEP] c_1 [SEP] c_2 ... [SEP] c_{k+1}), and provide this input to the same Transformer model used for pointwise reranking.", "We use the final hidden vector E corresponding to the first input token [CLS] generated by the Transformer, and a classification layer with weights W \in R^{(k+1) \times |E|}, and train the model using a standard cross-entropy classification loss, -y \log(softmax(E W^T)), where y is a one-hot vector representing the labels for the k + 1 candidates, i.e., |y| = k + 1.", "We use a Transformer model fine-tuned with the TANDA-RoBERTa-base or -large models, i.e., RoBERTa models fine-tuned on ASNQ (Garg et al., 2020).", "The scores for the candidate answers are calculated as (p(c_1),", "..., p(c_{k+1})) = softmax(E W^T).", "Then, we rerank the c_i according to their probabilities.", "Joint Model Pairwise Our second baseline is similar to the first.", "We concatenate the question with each c_i to constitute the (q, c_i) pairs, which are input to the Transformer, and we use the first input token [CLS] as the representation of each (q, c_i) pair.", "Then, we concatenate the embedding of the pair containing the target candidate, (q, t), with the embeddings of all the 
other candidates' [CLS].", "(q, t) is always in the first position.", "We train the model using a standard classification loss.", "At classification time, we select one target candidate at a time, and set it in the first position, followed by all the others.", "We classify all k + 1 candidates and use their scores to rerank them.", "It should be noted that, to qualify as a pairwise approach, Joint Model Pairwise should use a ranking loss.", "However, we always use the standard cross-entropy loss, as it is more efficient and the difference in performance is negligible.", "Joint Model with KGAT Liu et al. (2020) presented an interesting model, Kernel Graph Attention Network (KGAT), for fact verification: given a claimed fact f, and a set of evidences Ev = {ev_1, ev_2, ..., ev_m}, their model carries out joint reasoning over Ev, e.g., aggregating information to estimate the probability of f being true or false, p(y | f, Ev), where y \in {true, false}.", "The approach is based on a fully connected graph, G, whose nodes are the n_i = (f, ev_i) pairs, and p(y | f, Ev) = \sum_i p(y | f, ev_i, Ev) p(ev_i | f, Ev), where p(y | f, ev_i, Ev) = p(y | n_i, G) is the label probability in each node i conditioned on the whole graph, and p(ev_i | f, Ev) = p(n_i | G) is the probability of selecting the most informative evidence.", "KGAT uses an edge kernel to perform a hierarchical attention mechanism, which propagates information between nodes and aggregates evidence.", "We built a KGAT model for AS2 as follows: we replace", "(i) ev_i with the set of candidate answers c_i, and", "(ii) the claim f with the question and a target answer pair, (q, t).", "KGAT constructs the evidence graph G by using each claim-evidence pair as a node, which, in our case, is ((q, t), c_i), and connects all node pairs with edges, making it a fully connected evidence graph.", "This way, sentence and token attention operate over the triplets, (q, t, c_i), 
establishing semantic links, which can help to support or undermine the correctness of t.", "The original KGAT aggregates all the pieces of information we built, based on their relevance, to determine the probability of t.", "As we use AS2 data, the probability concerns the correctness of t.", "In more detail, we initialize the node representation using the contextual embeddings obtained with two TANDA-RoBERTa-base models 1 : the first produces the embedding of (q, t), while the second outputs the embedding of (q, c_i).", "Then, we apply a max-pooling operation on these two to get the final node representation.", "The rest of the architecture is identical to the original KGAT.", "Finally, at test time, we select one c_i at a time as the target t, and compute its probability, which ranks c_i.", "We propose the Answer Support Reranker (ASR), which uses an answer pair classifier to provide evidence for a target answer t.", "Given a question q, and a subset A of its top k + 1 ranked answer candidates (reranked by an AS2 model), we build a function Q \times C \times C^k \to R that maps (q, t, A \\ {t}) to the probability of t being correct, where C is the set of sentence candidates.", "We also design a multi-classifier MASR, which combines k ASR models, one for each different target answer.", "We developed the ASR architecture described in Figure 1c.", "This consists of three main components: 1. a Pointwise Reranker (PR), which provides the embedding of the input (q, t), described in Figure 1a.", "This is essentially the state-of-the-art AS2 model based on the TANDA approach applied to the RoBERTa pre-trained Transformer.", "2. 
To reduce the noise that may be introduced by irrelevant c_i, we use the Answer Support Classifier (ASC), which classifies each (t, c_i) into one of the following four classes: 0: t and c_i are both correct; 1: t is correct while c_i is not; 2: vice versa; and 3: both are incorrect.", "This multi-classifier, described in Figure 1b, is built on top of a RoBERTa Transformer, which produces a PairWise Representation (PWR).", "ASC is trained end-to-end with the rest of the network in a multi-task learning fashion, using its specific cross-entropy loss, computed with the labels above.", "and c_i are the top candidates reranked by PR.", "The k representations are summarized by applying a max-pooling operation, which aggregates all the supporting or non-supporting properties of the candidates with respect to the target answer.", "The concatenation of the PR embedding with the max-pooling embedding is given as input to the final classification layer, which scores t with respect to q, also using the information from the other candidates.", "For training and testing, we select one t from the k + 1 candidates of q at a time, and compute its score.", "This way, we can rerank all the k + 1 candidates with their scores.", "Implementation details: ASR is a PR that also exploits the relation between t and A \\ {t}.", "We use RoBERTa to generate the [CLS] \in R^d embedding of (q, t) = E_t.", "We denote with E_j the [CLS] output by another RoBERTa Transformer applied to the answer pairs, i.e., (t, c_j).", "Then, we concatenate E_t to the max-pooling tensor from E_1,", "..., E_k: V = [E_t : Maxpool([E_1, ..., E_k])], (1) where V \in R^{2d} is the final representation of the target answer t.", "Then, we use a standard feed-forward network to implement a binary classification layer: p(y_i | q, t, C^k) = softmax(V W^T + B), where W \in R^{2 \times 2d} and B are parameters that transform the representation of the target answer t from dimension 2d to dimension 2, which represents 
the correct or incorrect labels.", "ASC labels There can be different interpretations when attempting to define labels for answer pairs.", "An alternative to the definition illustrated above is to use the following FEVER-compatible encoding: 0: t is correct, while c_i can be either, as even an incorrect c_i may provide important context (corresponding to the FEVER Support label); 1: t is incorrect, c_i correct, since c_i can provide evidence that t is not similar to a correct answer (corresponding to the FEVER Refuted label); and 2: both are incorrect, in which case nothing can be concluded (corresponding to the FEVER Neutral label).", "ASR still selects answers with a pointwise approach 2 .", "This means that we can improve it by (footnote 2: again, using a ranking loss did not provide a significant improvement)", "building a listwise model to select the best answer for each question, by utilizing the information from all target answers.", "In particular, the architecture of MASR shown in Figure 1d is made up of two parts:", "(i) a list of k + 1 ASR blocks, in which each ASR block provides the representation of a target answer t;", "(ii) a final multiclassifier and a softmax function, which scores each t from the concatenation of the k + 1 embeddings and selects the one with the highest score.", "For training and testing, we select one t at a time from the k + 1 candidates of q, based on the softmax output.", "Implementation details: The goal of MASR is to measure the relation between the k + 1 target answers, t_0,", "..., t_k.", "The representation of each target answer is the embedding V \in R^{2d} from Equation 1 in ASR.", "Then, we concatenate the hidden vectors of the k + 1 target answers to form a matrix V_{(q,k+1)} \in R^{(k+1) \times 2d}.", "We use this matrix and a classification layer with weights W \in R^{2d}, and compute a standard multi-class classification loss: L_MASR = -y \log(softmax(V_{(q,k+1)} W^T)), (2) where y is a one-hot vector, and |y| = k + 1.", "In these experiments, we compare our models: 
KGAT, ASR and MASR, with pointwise models, which are the state of the art for AS2.", "We also compare them with our joint model baselines (pairwise and listwise).", "Finally, we provide an error analysis.", "We used the two most popular AS2 datasets, and one real-world application dataset we built to test the generality of our approach.", "WikiQA is a QA dataset (Yang et al., 2015) containing a sample of questions and answer-sentence candidates from Bing query logs over Wikipedia.", "The answers are manually labeled.", "We follow the most common setting: training with all the questions that have at least one correct answer, and validating and testing with all the questions having at least one correct and one incorrect answer.", "TREC-QA is another popular QA benchmark by Wang et al. (2007).", "We use the same splits of the original data, following the common setting of previous work, e.g., (Garg et al., 2020).", "WQA The Web-based Question Answering dataset was built by Alexa AI as part of the effort to improve understanding and benchmarking in QA systems.", "The creation process includes the following steps:", "(i) given a set of questions we collected from the web, a search engine is used to retrieve up to 1,000 web pages from an index containing hundreds of millions of pages.", "(ii) From the set of retrieved documents, all candidate sentences are extracted and ranked using the AS2 models from (Garg et al., 2020).", "Finally,", "(iii) the top candidates for each question are manually assessed as correct or incorrect by human judges.", "This allowed us to obtain a richer variety of answers from multiple sources, with a higher average number of answers.", "Table 2 reports the corpus statistics of WikiQA, TREC-QA, and WQA 3 .", "FEVER is a large-scale public corpus, proposed by Thorne et al. 
(2018a) for the fact verification task, consisting of 185,455 annotated claims from 5,416,537 documents from the Wikipedia dump of June 2017.", "All claims are labelled as Supported, Refuted or Not Enough Info by annotators.", "Table 3 shows the statistics of the dataset, which remain the same as in (Thorne et al., 2018b).", "Metrics The performance of QA systems is typically measured with Accuracy in providing correct answers, i.e., the percentage of correct responses.", "This is also referred to as Precision-at-1 (P@1) in the context of reranking, while standard Precision and Recall are not essential in our case, as we assume the system does not abstain from providing answers.", "We also use Mean Average Precision (MAP) and Mean Reciprocal Rank (MRR), evaluated on the test set using the entire set of candidates for each question. (Footnote 3: the public version of WQA will be released in the short-term future.)", "Please search for a publication with the title WQA: A Dataset for Web-based Question Answering Tasks on arXiv.org.", "Models We use the pre-trained RoBERTa-Base (12 layers) and RoBERTa-Large-MNLI (24 layers) models, which were released as checkpoints for use in downstream tasks 4 .", "Reranker training We adopt the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 2e-5 for the transfer step on the ASNQ dataset (Garg et al., 2020), and a learning rate of 1e-6 for the adapt step on the target dataset.", "We apply early stopping on the development set of the target corpus for both fine-tuning steps, based on the highest MAP score.", "We set the maximum number of epochs to 3 and 9 for the adapt and transfer steps, respectively.", "We set the maximum sequence length for RoBERTa to 128 tokens.", "KGAT and ASR training Again, we use the Adam optimizer, with a learning rate of 2e-6 for training the ASR model on the target dataset.", "We utilize one Tesla V100 GPU with 32GB memory and a training batch size of eight.", "We set the maximum sequence length for RoBERTa Base/Large to 130 tokens 
and the number of training epochs to 20.", "The other training configurations are the same as in the original KGAT model (Liu et al., 2020).", "We use two Transformer models for ASR: a RoBERTa Base/Large for PR, and one for ASC. (Footnote 4: https://github.com/pytorch/fairseq.)", "We set the maximum sequence length for RoBERTa to 128 tokens and the number of epochs to 20.", "MASR training We use the same configuration as for ASR training, including the optimizer type, learning rate, number of epochs, GPU type, maximum sequence length, etc.", "Additionally, we design two different models: MASR-F, using an ASC classifier targeting the FEVER labels, and MASR-FP, which initializes ASC with the data from FEVER.", "This is possible as the labels are compatible.", "The selection of the hyper-parameter k, i.e., the number of candidates to consider for supporting a target answer, is rather tricky.", "Indeed, the standard validation set is typically used for tuning PR.", "This means that the candidates PR moves to the top k + 1 positions are optimistically accurate.", "Thus, when also selecting the optimal k on the same validation set, there is a high risk of overfitting the model.", "We solved this problem by running a PR version not heavily optimized on the dev.", "set, i.e., we randomly chose a checkpoint after the standard three epochs of fine-tuning of the RoBERTa Transformer.", "Additionally, we tuned k only using the WQA dev.", "set, which contains 36,000 Q/A pairs.", "WikiQA and TREC-QA dev.", "sets are too small to be used (121 and 65 questions, respectively).", "Fig. 
2 plots the improvement of four different models, Joint Model Multi-classifier, Joint Model Pairwise, KGAT, and ASR, when using different k values.", "Their best results are reached for k = 5, 3, 2, and 3, respectively.", "We note that the most reliable curve shape (convex) is that of ASR and Joint Model Pairwise.", "Table 4 reports the P@1, MAP and MRR of the rerankers and the different answer supporting models on the WikiQA, TREC-QA and WQA datasets.", "As WQA is an internal dataset, we only report the improvement over PR in the tables.", "All models use the RoBERTa-Base pre-trained checkpoint and start from the same set of k candidates reranked by PR (the state-of-the-art model).", "The table shows that: PR replicates the MAP and MRR of the state-of-the-art reranker by Garg et al. (2020) on WikiQA.", "Joint Model Multi-classifier performs worse than PR on all measures and all datasets.", "This is in line with the findings of Bonadiman and Moschitti (2020), who also did not obtain improvements when jointly using all the candidates in a single representation.", "Joint Model Pairwise differs from ASR in that it concatenates the embeddings of the (q, c_i) pairs, instead of using max-pooling, and does not use any Answer Support Classifier (ASC).", "Still, it exploits the idea of aggregating the information of all pairs (q, c_i) with respect to a target answer t, which proves to be effective, as the model improves on PR over all measures and datasets.", "Our KGAT version for AS2 also improves on PR over all datasets and almost all measures, confirming that the idea of using candidates as support for the target answer is generally valid.", "However, it is not superior to Joint Model Pairwise.", "ASR achieves the highest performance among all models (except MASR-FP on WQA), on all datasets, and on all measures.", "For example, it outperforms PR by almost 3 absolute percentage points in P@1 on WikiQA, and by almost 6 points on TREC, from 91.18% to 97.06%, which corresponds to an error reduction of 60%.", 
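The ASR target-answer representation of Equation 1 and its binary classifier can be condensed into a small NumPy sketch. All shapes, names, and the random stand-ins for the RoBERTa [CLS] embeddings are our own illustrative assumptions, not the authors' code:

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over the last axis
    e = np.exp(x - np.max(x, axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def asr_score(E_t, E_pairs, W, B):
    """Sketch of the ASR head (Equation 1): concatenate the (q, t) embedding
    with the max-pool over the (t, c_j) pair embeddings, then classify.

    E_t:     (d,)    [CLS] embedding of (q, t) from the PR encoder
    E_pairs: (k, d)  [CLS] embeddings of the (t, c_1) ... (t, c_k) pairs
    W, B:    (2, 2d) and (2,)  binary classification layer
    Returns the two class probabilities for the target answer t.
    """
    # V = [E_t : Maxpool([E_1, ..., E_k])], shape (2d,)
    V = np.concatenate([E_t, E_pairs.max(axis=0)])
    return softmax(W @ V + B)
```

Reranking then amounts to scoring each candidate as the target t in turn and sorting by its "correct" probability.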
"MASR and MASR-F do not achieve better performance than Joint Model Pairwise on WikiQA and TREC, although MASR outperforms all baselines and even ASR on WQA.", "This suggests that the significantly higher number of parameters of MASR cannot be trained on small corpora, while WQA has a sufficient number of examples.", "MASR-FP, which exploits FEVER for the initialization of ASC, performs better than MASR and MASR-F on WikiQA and TREC.", "Interestingly, it significantly outperforms ASR by 2% on WQA.", "This confirms the potential of the model when enough training data is available.", "We perform the randomization test (Yeh, 2000) to verify whether the models significantly differ in terms of prediction outcome.", "We use 100,000 trials for each calculation.", "The results confirm the statistically significant difference between ASR and all the baselines, with p < 0.05 for WikiQA, and between ASR and all models (i.e., also including KGAT) on WQA.", "As the state of the art for AS2 is obtained using RoBERTa Large, we trained KGAT and ASR using this pre-trained language model.", "Table 5 also reports the comparison with PR, which is the official state of the art.", "Again, our PR replicates the results of Garg et al. (2020), obtaining slightly lower performance on WikiQA but higher on TREC-QA.", "KGAT performs worse than PR on both datasets.", "ASR establishes the new state of the art on WikiQA with a MAP of 92.80 vs. 
92.00.", "The P@1 also significantly improves by 2%, reaching 89.71, which is impressively high.", "Also, on TREC-QA, ASR outperforms all models, being on par with PR regarding P@1.", "The latter is 97.06, which corresponds to mistaking the answers of only two questions.", "We manually checked these and found that they were two annotation errors: ASR achieves perfect accuracy, while PR only mistakes one answer.", "Of course, this just provides evidence that PR based on RoBERTa-Large solves the task of selecting the best answers (i.e., measuring P@1 on this dataset is no longer meaningful).", "Table 6 reports the accuracy of ASC inside the different models.", "In ASR, it uses the 4-way categories, while in MASR-based models, it uses the three FEVER labels. [Table 6: Accuracy and F1 of category 0 for ASC. WikiQA: ASR 0.59/0.00, MASR 0.46/0.00, MASR-F 0.46/0.00, MASR-FP 0.49/0.37; TREC-QA: ASR 0.56/0.80, MASR 0.45/0.62, MASR-F 0.64/0.78, MASR-FP 0.65/0.73; WQA: ASR 0.58/0.64, MASR 0.53/0.61, MASR-F 0.58/0.68, MASR-FP 0.59/0.69.] (see Sec. 
4.1).", "ACC is the overall accuracy, while F1 refers to category 0.", "We note that ASC in MASR-FP achieves the highest accuracy on average over all datasets.", "This happens because we pre-fine-tuned it with the FEVER data.", "We analyzed examples for which ASR is correct and PR is not.", "Tab.", "7 shows that, given q and k = 3 candidates, PR chooses c_1, a suitable but wrong answer.", "This probably happens because this answer best matches the syntactic/semantic pattern of the question, which asks for a type of color; indeed, the answer offers such a type, primary colors.", "PR does not rely on any background information that can support the set of colors in the answer.", "In contrast, ASR selects c_2, as it can rely on the support of the other answers.", "Its ASC provides an average score for category 0 (both members are correct) of c_2, i.e., (1/k) \sum_{i \neq 2} ASC(c_2, c_i) = 0.", "653, while for c_1 the average score is significantly lower, i.e., 0.522.", "This provides higher support for c_2, which is used by ASR to rerank the output of PR.", "Tab.", "8 shows an interesting case where all the sentences contain the required information, i.e., February.", "However, PR and ASR both choose answer c_0, which is correct but not natural, as it provides the requested information indirectly.", "Also, it contains a lot of ancillary information.", "In contrast, MASR is able to rerank the best answer, c_1, into the top position.", "We have proposed new joint models for AS2.", "ASR encodes the relation between the target answer and all the other candidates, using an additional Transformer model and an Answer Support Classifier, while MASR jointly models the ASR representations for all target answers.", "We extensively tested KGAT, ASR, MASR, and the other joint model baselines we designed.", "The results show that our models can outperform the state of the art.", "Most interestingly, ASR consistently outperforms all the models (except MASR-FP), on 
all datasets, across all measures, and for both base and large Transformers.", "[Table 7 example] q: What kind of colors are in the rainbow?", "[Table 8 example] q: What's the month of Valentine's day?", "c_0: Celebrated on February 14 every year, Saint Valentine's day or Valentine's day is the traditional day on which lovers convey their love to each other by sending Valentine's cards, sometimes even anonymously.", "c_1: February is historically chosen to be the month of love and romance and the month to celebrate Valentine's day.", "c_2: In order for today to be Valentine's day, it is necessary that today is in the month of February.", "c_3: Every year, Valentine's day is celebrated on February 14 in many countries around the world.", "For example, ASR achieves the best reported results, i.e., MAP values of 92.80 and 94.88, on WikiQA and TREC-QA, respectively.", "MASR improves on ASR by 2% on WQA, since this dataset contains enough data to train the ASR representations jointly." ]
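The MASR listwise selection of Equation 2 can likewise be sketched in a few lines of NumPy; the matrix V and the weight vector w below are illustrative stand-ins for the stacked ASR representations and the learned classification layer:

```python
import numpy as np

def masr_select(V, w):
    """Sketch of MASR's listwise selection (Equation 2).

    V: (k+1, 2d) matrix stacking the ASR representations of the k+1 targets
    w: (2d,)     weights of the final classification layer
    Returns the softmax probability of each target answer and the best index.
    """
    z = V @ w                    # one logit per target answer
    e = np.exp(z - z.max())
    probs = e / e.sum()          # softmax over the k+1 targets
    return probs, int(np.argmax(probs))
```

Training would then apply the cross-entropy loss of Equation 2 between these probabilities and the one-hot gold vector y.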
[ "method", "result", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "objective", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "method", "abstain", "result", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "other", "other", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", 
"other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain" ]
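For reference, the pointwise AS2 reranker that all the joint models above build on scores each (q, c) pair independently. A minimal sketch of its classification head and cross-entropy loss follows; the embedding E is a random stand-in for the Transformer's output, and the helper names are our own:

```python
import numpy as np

def pointwise_prob(E, W, B):
    """p(q, c) = softmax(W tanh(E) + B); returns the 'correct' probability."""
    z = W @ np.tanh(E) + B
    e = np.exp(z - z.max())
    return float((e / e.sum())[1])    # index 1 = "correct" class

def cross_entropy(p_correct, label):
    """L = -sum_l y_l log(y_hat_l), with y_hat_1 = p and y_hat_0 = 1 - p."""
    y_hat = np.array([1.0 - p_correct, p_correct])
    return float(-np.log(y_hat[label]))
```

The loss is lowest when the predicted probability agrees with the gold label, which is what drives the pairwise fine-tuning described earlier.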
[ "Nowadays, open-domain dialogue models can generate acceptable responses according to the historical context, based on large-scale pre-trained language models.", "However, they generally concatenate the dialogue history directly as the model input to predict the response, which we name the flat pattern, and thereby ignore the dynamic information flow across dialogue utterances.", "In this work, we propose the DialoFlow model, in which we introduce a dynamic flow mechanism to model the context flow, and design three training objectives to capture the information dynamics across dialogue utterances by addressing the semantic influence brought about by each utterance in large-scale pre-training.", "Experiments on the multi-reference Reddit Dataset and DailyDialog Dataset demonstrate that our DialoFlow significantly outperforms DialoGPT on the dialogue generation task.", "Besides, we propose the Flow score, an effective automatic metric for evaluating interactive human-bot conversation quality based on the pre-trained DialoFlow, which presents a high chatbot-level correlation ( r = 0 . 
9 ) with human ratings among 11 chatbots.", "Code and pre-trained models will be made public.", "1 1 Introduction Recent intelligent open-domain chatbots (Adiwardana et al., 2020; Bao et al., 2020; Smith et al., 2020) have made substantial progress thanks to the rapid development of large-scale pre-training approaches (Devlin et al., 2019; Radford et al., 2019; Brown et al., 2020) and the large amount of conversational data (Dinan et al., 2019; Baumgartner et al., 2020; Smith et al., 2020).", "However, Joint work with Pattern Recognition Center, WeChat AI, Tencent Inc.", "Yang Feng is the corresponding author.", "Work was done when Zekang Li and Zhengcong Fei were interns at WeChat AI.", "1 https://github.com/ictnlp/DialoFlow (Figure 1 example: 'When was this break in?')", "large-scale dialogue pre-training is still challenging.", "Most of the previous work on dialogue history modeling falls into two groups.", "One group of works generally concatenates the dialogue history as the model input and predicts the response (Zhang et al., 2020; Smith et al., 2020; Bao et al., 2020), named the flat pattern, which is commonly adopted in large-scale pre-training.", "However, Sankar et al. 
(2019) demonstrate that flat concatenation is likely to ignore the conversational dynamics across the utterances in the dialogue history.", "Another group of works employs hierarchical modeling to encode the dialogue history (Serban et al., 2016b; Shan et al., 2020; Gu et al., 2020), in which the utterances are separately encoded and then fed into an utterance-level encoder.", "These approaches lack the history information when encoding each individual utterance, although such information is essential for understanding dialogue utterances.", "Thus, all the aforementioned methods are deficient in modeling the dynamic information in the dialogue history.", "Inspired by the process whereby humans always consider the goal or influence of the next response before they continue the conversation (Brown-Schmidt and Konopka, 2015), we propose DialoFlow to model the dynamic information flow in the dialogue history by addressing the semantic influence brought about by each utterance.", "As shown in Figure 1, we define the dense representations of the dialogue history at the different utterances as the contexts (gray dotted line), and the context transformation as the semantic influence brought by each utterance.", "In particular, our DialoFlow models the utterance-level flow of the history context.", "Correspondingly, the semantic influence of each utterance can be measured by the difference between two adjacent contexts, which is further used to guide the current response generation.", "Practically, we first employ a Transformer to encode the whole conversation to get the dense context representations.", "Then we design a unidirectional Flow module to capture the context flow at the utterance level, and design three training objectives to model the context flow and measure the semantic influence brought about by each utterance: 1) Context Flow Modeling, which aims to capture the context flow schema.", "2) Semantic Influence Modeling, which aims to measure the predicted semantic 
influence.", "3) Response Generation Modeling, which generates the response under the guidance of the predicted semantic influence.", "Furthermore, to demonstrate the effect of modeling the dynamic information flow in dialogue understanding, we propose the Flow score based on DialoFlow, an automatic reference-free metric for interactive dialogue evaluation that measures the semantic influence perplexity.", "We pre-train the proposed DialoFlow on large-scale Reddit comments and conduct experiments on dialogue generation and interactive dialogue quality evaluation.", "For dialogue generation, DialoFlow achieves significant improvements on the Reddit multi-reference dataset and the DailyDialog dataset compared to the baseline DialoGPT (Zhang et al., 2020).", "For interactive dialogue quality evaluation, our proposed Flow score obtains an impressively high chatbot-level correlation ( r = 0 . 9 ) with human ratings on 2200 human-bot dialogues from 11 chatbots.", "In summary, we propose DialoFlow to model the dynamic information flow in the dialogue history by addressing the semantic influence brought about by each utterance.", "Besides, we design an automatic reference-free evaluation metric, the Flow score, based on the pre-trained DialoFlow for interactive dialogue quality evaluation.", "The experimental results illustrate that DialoFlow achieves significant improvements on dialogue generation compared to DialoGPT, and the Flow score shows an impressively high chatbot-level correlation ( r = 0 . 
9 ) with human ratings.", "The proposed DialoFlow models the dynamic information flow in the whole dialogue history by addressing the semantic influence brought about by each utterance in sequence.", "Before introducing DialoFlow in detail, we first define some terms.", "Formally, let D = {u_1, u_2, ..., u_N} denote a whole dialogue.", "For each utterance u_k = {u_k^1, u_k^2, ..., u_k^T}, u_k^t denotes the t-th word in the k-th utterance.", "We further denote u_{<k} = {u_1, u_2, ..., u_{k-1}} as the dialogue history at the k-th utterance.", "Besides, the dense representation of the dialogue history u_{<k} at the k-th utterance is denoted as the context C_k.", "The difference between the new context C_{k+1} at the (k+1)-th utterance and the previous context C_k at the k-th utterance is defined as the semantic influence I_k of the k-th utterance, which can be formulated as: I_k = C_{k+1} - C_k.", "In our method, DialoFlow first encodes the dialogue history and predicts the future context C'_{k+1} according to all the previous history contexts C_1, C_2, ..., C_k.", "Then, at the response generation stage, the model acquires the predicted target semantic influence I'_k, and generates the target response u_k auto-regressively, considering both the predicted semantic influence and the historical sub-sentences.", "Specifically, as shown in Figure 2, DialoFlow models the context flow by designing a unidirectional Flow module upon the Transformer, and we introduce three multi-task training objectives to supervise the context flow, the semantic influence, and the response generation. [Figure 2: the model architecture, showing Transformer blocks (multi-head attention, layer normalization, feed-forward sublayers) with the Flow module on top predicting the contexts.]"
sha1_base64=\"/4AFNNMB+VGplUU8zEc+J1/3lgk=\">AAAC0HicjVHLTsJAFD3UF+ILdemmkRhdkVZNdElk4xKNPBIgpB0GaOjLdmokhBi3/oBb/SrjH+hfeGcsiUqMTtP2zLn3nJl7rx26TiwM4zWjzc0vLC5ll3Mrq2vrG/nNrVocJBHjVRa4QdSwrZi7js+rwhEub4QRtzzb5XV7WJbx+g2PYifwr8Qo5G3P6vtOz2GWIKrd8iwxsHvj8qRztN/JF4yioZY+C8wUFJCuSpB/QQtdBGBI4IHDhyDswkJMTxMmDITEtTEmLiLkqDjHBDnSJpTFKcMidkjfPu2aKevTXnrGSs3oFJfeiJQ69kgTUF5EWJ6mq3iinCX7m/dYecq7jehvp14esQIDYv/STTP/q5O1CPRwqmpwqKZQMbI6lrokqivy5vqXqgQ5hMRJ3KV4RJgp5bTPutLEqnbZW0vF31SmZOWepbkJ3uUtacDmz3HOgtph0TSK5sVxoXSWjjqLHezigOZ5ghLOUUGVvK/xiCc8a5farXan3X+maplUs41vS3v4AIAelD8=</latexit> <latexit sha1_base64=\"/4AFNNMB+VGplUU8zEc+J1/3lgk=\">AAAC0HicjVHLTsJAFD3UF+ILdemmkRhdkVZNdElk4xKNPBIgpB0GaOjLdmokhBi3/oBb/SrjH+hfeGcsiUqMTtP2zLn3nJl7rx26TiwM4zWjzc0vLC5ll3Mrq2vrG/nNrVocJBHjVRa4QdSwrZi7js+rwhEub4QRtzzb5XV7WJbx+g2PYifwr8Qo5G3P6vtOz2GWIKrd8iwxsHvj8qRztN/JF4yioZY+C8wUFJCuSpB/QQtdBGBI4IHDhyDswkJMTxMmDITEtTEmLiLkqDjHBDnSJpTFKcMidkjfPu2aKevTXnrGSs3oFJfeiJQ69kgTUF5EWJ6mq3iinCX7m/dYecq7jehvp14esQIDYv/STTP/q5O1CPRwqmpwqKZQMbI6lrokqivy5vqXqgQ5hMRJ3KV4RJgp5bTPutLEqnbZW0vF31SmZOWepbkJ3uUtacDmz3HOgtph0TSK5sVxoXSWjjqLHezigOZ5ghLOUUGVvK/xiCc8a5farXan3X+maplUs41vS3v4AIAelD8=</latexit> Response Generator -= Token Embedding Segment Embedding Position Embedding Utterance 1 Pre-normalization Transformer Block 1 [C] Utterance k-1 [C] Utterance k h 1 k <latexit sha1_base64=\"2hDWUksPO3HucBzuPW+qo2DrXZs=\">AAAC0XicjVHLSsNAFD3GV62vqks3wSK4KokIuiy6cVnRPqAvJum0Dc2LyUQopSBu/QG3+lPiH+hfeGdMQS2iE5KcOfeeM3PvdWLfS6RlvS4Yi0vLK6u5tfz6xubWdmFnt5ZEqXB51Y38SDQclnDfC3lVetLnjVhwFjg+rzujCxWv33KReFF4I8cxbwdsEHp9z2WSqE4rYHLo9CfDaXfUsbuFolWy9DLngZ2BIrJViQovaKGHCC5SBOAIIQn7YEjoacKGhZi4NibECUKejnNMkSdtSlmcMhixI/oOaNfM2JD2yjPRapdO8ekVpDRxSJqI8gRhdZqp46l2Vuxv3hPtqe42pr+TeQXESgyJ/Us3y/yvTtUi0ceZrsGjmmLNqOrczCXVXVE3N79UJckhJk7hHsUFYVcrZ302tSbRtaveMh1/05mKVXs3y03xrm5JA7Z/jnMe1I5LtlWyr06K5fNs1Dns4wBHNM9TlHGJCqrkLfCIJzwb18bYuDPuP1ONhUyzh2/LePgAgBaVDg==</latexit> <latexit 
sha1_base64=\"2hDWUksPO3HucBzuPW+qo2DrXZs=\">AAAC0XicjVHLSsNAFD3GV62vqks3wSK4KokIuiy6cVnRPqAvJum0Dc2LyUQopSBu/QG3+lPiH+hfeGdMQS2iE5KcOfeeM3PvdWLfS6RlvS4Yi0vLK6u5tfz6xubWdmFnt5ZEqXB51Y38SDQclnDfC3lVetLnjVhwFjg+rzujCxWv33KReFF4I8cxbwdsEHp9z2WSqE4rYHLo9CfDaXfUsbuFolWy9DLngZ2BIrJViQovaKGHCC5SBOAIIQn7YEjoacKGhZi4NibECUKejnNMkSdtSlmcMhixI/oOaNfM2JD2yjPRapdO8ekVpDRxSJqI8gRhdZqp46l2Vuxv3hPtqe42pr+TeQXESgyJ/Us3y/yvTtUi0ceZrsGjmmLNqOrczCXVXVE3N79UJckhJk7hHsUFYVcrZ302tSbRtaveMh1/05mKVXs3y03xrm5JA7Z/jnMe1I5LtlWyr06K5fNs1Dns4wBHNM9TlHGJCqrkLfCIJzwb18bYuDPuP1ONhUyzh2/LePgAgBaVDg==</latexit> <latexit sha1_base64=\"2hDWUksPO3HucBzuPW+qo2DrXZs=\">AAAC0XicjVHLSsNAFD3GV62vqks3wSK4KokIuiy6cVnRPqAvJum0Dc2LyUQopSBu/QG3+lPiH+hfeGdMQS2iE5KcOfeeM3PvdWLfS6RlvS4Yi0vLK6u5tfz6xubWdmFnt5ZEqXB51Y38SDQclnDfC3lVetLnjVhwFjg+rzujCxWv33KReFF4I8cxbwdsEHp9z2WSqE4rYHLo9CfDaXfUsbuFolWy9DLngZ2BIrJViQovaKGHCC5SBOAIIQn7YEjoacKGhZi4NibECUKejnNMkSdtSlmcMhixI/oOaNfM2JD2yjPRapdO8ekVpDRxSJqI8gRhdZqp46l2Vuxv3hPtqe42pr+TeQXESgyJ/Us3y/yvTtUi0ceZrsGjmmLNqOrczCXVXVE3N79UJckhJk7hHsUFYVcrZ302tSbRtaveMh1/05mKVXs3y03xrm5JA7Z/jnMe1I5LtlWyr06K5fNs1Dns4wBHNM9TlHGJCqrkLfCIJzwb18bYuDPuP1ONhUyzh2/LePgAgBaVDg==</latexit> <latexit sha1_base64=\"2hDWUksPO3HucBzuPW+qo2DrXZs=\">AAAC0XicjVHLSsNAFD3GV62vqks3wSK4KokIuiy6cVnRPqAvJum0Dc2LyUQopSBu/QG3+lPiH+hfeGdMQS2iE5KcOfeeM3PvdWLfS6RlvS4Yi0vLK6u5tfz6xubWdmFnt5ZEqXB51Y38SDQclnDfC3lVetLnjVhwFjg+rzujCxWv33KReFF4I8cxbwdsEHp9z2WSqE4rYHLo9CfDaXfUsbuFolWy9DLngZ2BIrJViQovaKGHCC5SBOAIIQn7YEjoacKGhZi4NibECUKejnNMkSdtSlmcMhixI/oOaNfM2JD2yjPRapdO8ekVpDRxSJqI8gRhdZqp46l2Vuxv3hPtqe42pr+TeQXESgyJ/Us3y/yvTtUi0ceZrsGjmmLNqOrczCXVXVE3N79UJckhJk7hHsUFYVcrZ302tSbRtaveMh1/05mKVXs3y03xrm5JA7Z/jnMe1I5LtlWyr06K5fNs1Dns4wBHNM9TlHGJCqrkLfCIJzwb18bYuDPuP1ONhUyzh2/LePgAgBaVDg==</latexit> h 2 k <latexit 
sha1_base64=\"BSM8o213O4Met+w0FNA5d+MVlAk=\">AAAC0XicjVHLSsNAFD2Nr1pfVZdugkVwVZIi6LLoxmVF+4A+JEmnbWheTCZCKQVx6w+41Z8S/0D/wjvjFHwgOiHJmXPvOTP3XjcJ/FRY1kvOWFhcWl7JrxbW1jc2t4rbO400zrjH6l4cxLzlOikL/IjVhS8C1ko4c0I3YE13fCbjzRvGUz+OrsQkYd3QGUb+wPccQVSvEzpi5A6mo9n1uFe5LpassqWW+RPYGpSgVy0uPqODPmJ4yBCCIYIgHMBBSk8bNiwkxHUxJY4T8lWcYYYCaTPKYpThEDum75B2bc1GtJeeqVJ7dEpALyeliQPSxJTHCcvTTBXPlLNkf/OeKk95twn9Xe0VEiswIvYv3TzzvzpZi8AAJ6oGn2pKFCOr87RLproib25+qkqQQ0KcxH2Kc8KeUs77bCpNqmqXvXVU/FVlSlbuPZ2b4U3ekgZsfx/nT9ColG2rbF8claqnetR57GEfhzTPY1Rxjhrq5M3xgEc8GZfGxLg17j5SjZzW7OLLMu7fAYJ2lQ8=</latexit> <latexit sha1_base64=\"BSM8o213O4Met+w0FNA5d+MVlAk=\">AAAC0XicjVHLSsNAFD2Nr1pfVZdugkVwVZIi6LLoxmVF+4A+JEmnbWheTCZCKQVx6w+41Z8S/0D/wjvjFHwgOiHJmXPvOTP3XjcJ/FRY1kvOWFhcWl7JrxbW1jc2t4rbO400zrjH6l4cxLzlOikL/IjVhS8C1ko4c0I3YE13fCbjzRvGUz+OrsQkYd3QGUb+wPccQVSvEzpi5A6mo9n1uFe5LpassqWW+RPYGpSgVy0uPqODPmJ4yBCCIYIgHMBBSk8bNiwkxHUxJY4T8lWcYYYCaTPKYpThEDum75B2bc1GtJeeqVJ7dEpALyeliQPSxJTHCcvTTBXPlLNkf/OeKk95twn9Xe0VEiswIvYv3TzzvzpZi8AAJ6oGn2pKFCOr87RLproib25+qkqQQ0KcxH2Kc8KeUs77bCpNqmqXvXVU/FVlSlbuPZ2b4U3ekgZsfx/nT9ColG2rbF8claqnetR57GEfhzTPY1Rxjhrq5M3xgEc8GZfGxLg17j5SjZzW7OLLMu7fAYJ2lQ8=</latexit> <latexit sha1_base64=\"BSM8o213O4Met+w0FNA5d+MVlAk=\">AAAC0XicjVHLSsNAFD2Nr1pfVZdugkVwVZIi6LLoxmVF+4A+JEmnbWheTCZCKQVx6w+41Z8S/0D/wjvjFHwgOiHJmXPvOTP3XjcJ/FRY1kvOWFhcWl7JrxbW1jc2t4rbO400zrjH6l4cxLzlOikL/IjVhS8C1ko4c0I3YE13fCbjzRvGUz+OrsQkYd3QGUb+wPccQVSvEzpi5A6mo9n1uFe5LpassqWW+RPYGpSgVy0uPqODPmJ4yBCCIYIgHMBBSk8bNiwkxHUxJY4T8lWcYYYCaTPKYpThEDum75B2bc1GtJeeqVJ7dEpALyeliQPSxJTHCcvTTBXPlLNkf/OeKk95twn9Xe0VEiswIvYv3TzzvzpZi8AAJ6oGn2pKFCOr87RLproib25+qkqQQ0KcxH2Kc8KeUs77bCpNqmqXvXVU/FVlSlbuPZ2b4U3ekgZsfx/nT9ColG2rbF8claqnetR57GEfhzTPY1Rxjhrq5M3xgEc8GZfGxLg17j5SjZzW7OLLMu7fAYJ2lQ8=</latexit> <latexit 
sha1_base64=\"BSM8o213O4Met+w0FNA5d+MVlAk=\">AAAC0XicjVHLSsNAFD2Nr1pfVZdugkVwVZIi6LLoxmVF+4A+JEmnbWheTCZCKQVx6w+41Z8S/0D/wjvjFHwgOiHJmXPvOTP3XjcJ/FRY1kvOWFhcWl7JrxbW1jc2t4rbO400zrjH6l4cxLzlOikL/IjVhS8C1ko4c0I3YE13fCbjzRvGUz+OrsQkYd3QGUb+wPccQVSvEzpi5A6mo9n1uFe5LpassqWW+RPYGpSgVy0uPqODPmJ4yBCCIYIgHMBBSk8bNiwkxHUxJY4T8lWcYYYCaTPKYpThEDum75B2bc1GtJeeqVJ7dEpALyeliQPSxJTHCcvTTBXPlLNkf/OeKk95twn9Xe0VEiswIvYv3TzzvzpZi8AAJ6oGn2pKFCOr87RLproib25+qkqQQ0KcxH2Kc8KeUs77bCpNqmqXvXVU/FVlSlbuPZ2b4U3ekgZsfx/nT9ColG2rbF8claqnetR57GEfhzTPY1Rxjhrq5M3xgEc8GZfGxLg17j5SjZzW7OLLMu7fAYJ2lQ8=</latexit> h 2 k <latexit sha1_base64=\"BSM8o213O4Met+w0FNA5d+MVlAk=\">AAAC0XicjVHLSsNAFD2Nr1pfVZdugkVwVZIi6LLoxmVF+4A+JEmnbWheTCZCKQVx6w+41Z8S/0D/wjvjFHwgOiHJmXPvOTP3XjcJ/FRY1kvOWFhcWl7JrxbW1jc2t4rbO400zrjH6l4cxLzlOikL/IjVhS8C1ko4c0I3YE13fCbjzRvGUz+OrsQkYd3QGUb+wPccQVSvEzpi5A6mo9n1uFe5LpassqWW+RPYGpSgVy0uPqODPmJ4yBCCIYIgHMBBSk8bNiwkxHUxJY4T8lWcYYYCaTPKYpThEDum75B2bc1GtJeeqVJ7dEpALyeliQPSxJTHCcvTTBXPlLNkf/OeKk95twn9Xe0VEiswIvYv3TzzvzpZi8AAJ6oGn2pKFCOr87RLproib25+qkqQQ0KcxH2Kc8KeUs77bCpNqmqXvXVU/FVlSlbuPZ2b4U3ekgZsfx/nT9ColG2rbF8claqnetR57GEfhzTPY1Rxjhrq5M3xgEc8GZfGxLg17j5SjZzW7OLLMu7fAYJ2lQ8=</latexit> <latexit sha1_base64=\"BSM8o213O4Met+w0FNA5d+MVlAk=\">AAAC0XicjVHLSsNAFD2Nr1pfVZdugkVwVZIi6LLoxmVF+4A+JEmnbWheTCZCKQVx6w+41Z8S/0D/wjvjFHwgOiHJmXPvOTP3XjcJ/FRY1kvOWFhcWl7JrxbW1jc2t4rbO400zrjH6l4cxLzlOikL/IjVhS8C1ko4c0I3YE13fCbjzRvGUz+OrsQkYd3QGUb+wPccQVSvEzpi5A6mo9n1uFe5LpassqWW+RPYGpSgVy0uPqODPmJ4yBCCIYIgHMBBSk8bNiwkxHUxJY4T8lWcYYYCaTPKYpThEDum75B2bc1GtJeeqVJ7dEpALyeliQPSxJTHCcvTTBXPlLNkf/OeKk95twn9Xe0VEiswIvYv3TzzvzpZi8AAJ6oGn2pKFCOr87RLproib25+qkqQQ0KcxH2Kc8KeUs77bCpNqmqXvXVU/FVlSlbuPZ2b4U3ekgZsfx/nT9ColG2rbF8claqnetR57GEfhzTPY1Rxjhrq5M3xgEc8GZfGxLg17j5SjZzW7OLLMu7fAYJ2lQ8=</latexit> <latexit 
sha1_base64=\"BSM8o213O4Met+w0FNA5d+MVlAk=\">AAAC0XicjVHLSsNAFD2Nr1pfVZdugkVwVZIi6LLoxmVF+4A+JEmnbWheTCZCKQVx6w+41Z8S/0D/wjvjFHwgOiHJmXPvOTP3XjcJ/FRY1kvOWFhcWl7JrxbW1jc2t4rbO400zrjH6l4cxLzlOikL/IjVhS8C1ko4c0I3YE13fCbjzRvGUz+OrsQkYd3QGUb+wPccQVSvEzpi5A6mo9n1uFe5LpassqWW+RPYGpSgVy0uPqODPmJ4yBCCIYIgHMBBSk8bNiwkxHUxJY4T8lWcYYYCaTPKYpThEDum75B2bc1GtJeeqVJ7dEpALyeliQPSxJTHCcvTTBXPlLNkf/OeKk95twn9Xe0VEiswIvYv3TzzvzpZi8AAJ6oGn2pKFCOr87RLproib25+qkqQQ0KcxH2Kc8KeUs77bCpNqmqXvXVU/FVlSlbuPZ2b4U3ekgZsfx/nT9ColG2rbF8claqnetR57GEfhzTPY1Rxjhrq5M3xgEc8GZfGxLg17j5SjZzW7OLLMu7fAYJ2lQ8=</latexit> <latexit sha1_base64=\"BSM8o213O4Met+w0FNA5d+MVlAk=\">AAAC0XicjVHLSsNAFD2Nr1pfVZdugkVwVZIi6LLoxmVF+4A+JEmnbWheTCZCKQVx6w+41Z8S/0D/wjvjFHwgOiHJmXPvOTP3XjcJ/FRY1kvOWFhcWl7JrxbW1jc2t4rbO400zrjH6l4cxLzlOikL/IjVhS8C1ko4c0I3YE13fCbjzRvGUz+OrsQkYd3QGUb+wPccQVSvEzpi5A6mo9n1uFe5LpassqWW+RPYGpSgVy0uPqODPmJ4yBCCIYIgHMBBSk8bNiwkxHUxJY4T8lWcYYYCaTPKYpThEDum75B2bc1GtJeeqVJ7dEpALyeliQPSxJTHCcvTTBXPlLNkf/OeKk95twn9Xe0VEiswIvYv3TzzvzpZi8AAJ6oGn2pKFCOr87RLproib25+qkqQQ0KcxH2Kc8KeUs77bCpNqmqXvXVU/FVlSlbuPZ2b4U3ekgZsfx/nT9ColG2rbF8claqnetR57GEfhzTPY1Rxjhrq5M3xgEc8GZfGxLg17j5SjZzW7OLLMu7fAYJ2lQ8=</latexit> u 2 k \u0000 1 <latexit sha1_base64=\"8BT758PbIxiJiVRyaS+2U72IlwA=\">AAACzHicjVHLSsNAFD2Nr1pfVZdugkVwY0lE0GXRjSupYB9Sa0nSaR2aF8lEKKFbf8Ctfpf4B/oX3hmnoBbRCUnOnHvPmbn3urHPU2FZrwVjbn5hcam4XFpZXVvfKG9uNdMoSzzW8CI/StqukzKfh6whuPBZO06YE7g+a7mjMxlv3bMk5VF4JcYx6wbOMOQD7jmCqOusl48O7MntYa9csaqWWuYssDWoQK96VH7BDfqI4CFDAIYQgrAPByk9HdiwEBPXRU5cQoirOMMEJdJmlMUowyF2RN8h7TqaDWkvPVOl9ugUn96ElCb2SBNRXkJYnmaqeKacJfubd6485d3G9He1V0CswB2xf+mmmf/VyVoEBjhRNXCqKVaMrM7TLpnqiry5+aUqQQ4xcRL3KZ4Q9pRy2mdTaVJVu+yto+JvKlOycu/p3Azv8pY0YPvnOGdB87BqW1X78qhSO9WjLmIHu9ineR6jhnPU0SDvAI94wrNxYQgjNyafqUZBa7bxbRkPHwKFkno=</latexit> <latexit 
sha1_base64=\"8BT758PbIxiJiVRyaS+2U72IlwA=\">AAACzHicjVHLSsNAFD2Nr1pfVZdugkVwY0lE0GXRjSupYB9Sa0nSaR2aF8lEKKFbf8Ctfpf4B/oX3hmnoBbRCUnOnHvPmbn3urHPU2FZrwVjbn5hcam4XFpZXVvfKG9uNdMoSzzW8CI/StqukzKfh6whuPBZO06YE7g+a7mjMxlv3bMk5VF4JcYx6wbOMOQD7jmCqOusl48O7MntYa9csaqWWuYssDWoQK96VH7BDfqI4CFDAIYQgrAPByk9HdiwEBPXRU5cQoirOMMEJdJmlMUowyF2RN8h7TqaDWkvPVOl9ugUn96ElCb2SBNRXkJYnmaqeKacJfubd6485d3G9He1V0CswB2xf+mmmf/VyVoEBjhRNXCqKVaMrM7TLpnqiry5+aUqQQ4xcRL3KZ4Q9pRy2mdTaVJVu+yto+JvKlOycu/p3Azv8pY0YPvnOGdB87BqW1X78qhSO9WjLmIHu9ineR6jhnPU0SDvAI94wrNxYQgjNyafqUZBa7bxbRkPHwKFkno=</latexit> <latexit sha1_base64=\"8BT758PbIxiJiVRyaS+2U72IlwA=\">AAACzHicjVHLSsNAFD2Nr1pfVZdugkVwY0lE0GXRjSupYB9Sa0nSaR2aF8lEKKFbf8Ctfpf4B/oX3hmnoBbRCUnOnHvPmbn3urHPU2FZrwVjbn5hcam4XFpZXVvfKG9uNdMoSzzW8CI/StqukzKfh6whuPBZO06YE7g+a7mjMxlv3bMk5VF4JcYx6wbOMOQD7jmCqOusl48O7MntYa9csaqWWuYssDWoQK96VH7BDfqI4CFDAIYQgrAPByk9HdiwEBPXRU5cQoirOMMEJdJmlMUowyF2RN8h7TqaDWkvPVOl9ugUn96ElCb2SBNRXkJYnmaqeKacJfubd6485d3G9He1V0CswB2xf+mmmf/VyVoEBjhRNXCqKVaMrM7TLpnqiry5+aUqQQ4xcRL3KZ4Q9pRy2mdTaVJVu+yto+JvKlOycu/p3Azv8pY0YPvnOGdB87BqW1X78qhSO9WjLmIHu9ineR6jhnPU0SDvAI94wrNxYQgjNyafqUZBa7bxbRkPHwKFkno=</latexit> <latexit sha1_base64=\"8BT758PbIxiJiVRyaS+2U72IlwA=\">AAACzHicjVHLSsNAFD2Nr1pfVZdugkVwY0lE0GXRjSupYB9Sa0nSaR2aF8lEKKFbf8Ctfpf4B/oX3hmnoBbRCUnOnHvPmbn3urHPU2FZrwVjbn5hcam4XFpZXVvfKG9uNdMoSzzW8CI/StqukzKfh6whuPBZO06YE7g+a7mjMxlv3bMk5VF4JcYx6wbOMOQD7jmCqOusl48O7MntYa9csaqWWuYssDWoQK96VH7BDfqI4CFDAIYQgrAPByk9HdiwEBPXRU5cQoirOMMEJdJmlMUowyF2RN8h7TqaDWkvPVOl9ugUn96ElCb2SBNRXkJYnmaqeKacJfubd6485d3G9He1V0CswB2xf+mmmf/VyVoEBjhRNXCqKVaMrM7TLpnqiry5+aUqQQ4xcRL3KZ4Q9pRy2mdTaVJVu+yto+JvKlOycu/p3Azv8pY0YPvnOGdB87BqW1X78qhSO9WjLmIHu9ineR6jhnPU0SDvAI94wrNxYQgjNyafqUZBa7bxbRkPHwKFkno=</latexit> u 1 k \u0000 1 <latexit 
sha1_base64=\"0FNuh+ZhchJ9NFloTrNE6icYLqM=\">AAACzHicjVHLSsNAFD2Nr1pfVZdugkVwY0lE0GXRjSupYB9Sa0nSaQ3Ni8lEKKFbf8Ctfpf4B/oX3hmnoBbRCUnOnHvPmbn3ukngp8KyXgvG3PzC4lJxubSyura+Ud7caqZxxj3W8OIg5m3XSVngR6whfBGwdsKZE7oBa7mjMxlv3TOe+nF0JcYJ64bOMPIHvucIoq6zXj46sCe3dq9csaqWWuYssDWoQK96XH7BDfqI4SFDCIYIgnAAByk9HdiwkBDXRU4cJ+SrOMMEJdJmlMUowyF2RN8h7TqajWgvPVOl9uiUgF5OShN7pIkpjxOWp5kqnilnyf7mnStPebcx/V3tFRIrcEfsX7pp5n91shaBAU5UDT7VlChGVudpl0x1Rd7c/FKVIIeEOIn7FOeEPaWc9tlUmlTVLnvrqPibypSs3Hs6N8O7vCUN2P45zlnQPKzaVtW+PKrUTvWoi9jBLvZpnseo4Rx1NMg7xCOe8GxcGMLIjclnqlHQmm18W8bDBwAlknk=</latexit> <latexit sha1_base64=\"0FNuh+ZhchJ9NFloTrNE6icYLqM=\">AAACzHicjVHLSsNAFD2Nr1pfVZdugkVwY0lE0GXRjSupYB9Sa0nSaQ3Ni8lEKKFbf8Ctfpf4B/oX3hmnoBbRCUnOnHvPmbn3ukngp8KyXgvG3PzC4lJxubSyura+Ud7caqZxxj3W8OIg5m3XSVngR6whfBGwdsKZE7oBa7mjMxlv3TOe+nF0JcYJ64bOMPIHvucIoq6zXj46sCe3dq9csaqWWuYssDWoQK96XH7BDfqI4SFDCIYIgnAAByk9HdiwkBDXRU4cJ+SrOMMEJdJmlMUowyF2RN8h7TqajWgvPVOl9uiUgF5OShN7pIkpjxOWp5kqnilnyf7mnStPebcx/V3tFRIrcEfsX7pp5n91shaBAU5UDT7VlChGVudpl0x1Rd7c/FKVIIeEOIn7FOeEPaWc9tlUmlTVLnvrqPibypSs3Hs6N8O7vCUN2P45zlnQPKzaVtW+PKrUTvWoi9jBLvZpnseo4Rx1NMg7xCOe8GxcGMLIjclnqlHQmm18W8bDBwAlknk=</latexit> <latexit sha1_base64=\"0FNuh+ZhchJ9NFloTrNE6icYLqM=\">AAACzHicjVHLSsNAFD2Nr1pfVZdugkVwY0lE0GXRjSupYB9Sa0nSaQ3Ni8lEKKFbf8Ctfpf4B/oX3hmnoBbRCUnOnHvPmbn3ukngp8KyXgvG3PzC4lJxubSyura+Ud7caqZxxj3W8OIg5m3XSVngR6whfBGwdsKZE7oBa7mjMxlv3TOe+nF0JcYJ64bOMPIHvucIoq6zXj46sCe3dq9csaqWWuYssDWoQK96XH7BDfqI4SFDCIYIgnAAByk9HdiwkBDXRU4cJ+SrOMMEJdJmlMUowyF2RN8h7TqajWgvPVOl9uiUgF5OShN7pIkpjxOWp5kqnilnyf7mnStPebcx/V3tFRIrcEfsX7pp5n91shaBAU5UDT7VlChGVudpl0x1Rd7c/FKVIIeEOIn7FOeEPaWc9tlUmlTVLnvrqPibypSs3Hs6N8O7vCUN2P45zlnQPKzaVtW+PKrUTvWoi9jBLvZpnseo4Rx1NMg7xCOe8GxcGMLIjclnqlHQmm18W8bDBwAlknk=</latexit> <latexit 
sha1_base64=\"0FNuh+ZhchJ9NFloTrNE6icYLqM=\">AAACzHicjVHLSsNAFD2Nr1pfVZdugkVwY0lE0GXRjSupYB9Sa0nSaQ3Ni8lEKKFbf8Ctfpf4B/oX3hmnoBbRCUnOnHvPmbn3ukngp8KyXgvG3PzC4lJxubSyura+Ud7caqZxxj3W8OIg5m3XSVngR6whfBGwdsKZE7oBa7mjMxlv3TOe+nF0JcYJ64bOMPIHvucIoq6zXj46sCe3dq9csaqWWuYssDWoQK96XH7BDfqI4SFDCIYIgnAAByk9HdiwkBDXRU4cJ+SrOMMEJdJmlMUowyF2RN8h7TqajWgvPVOl9uiUgF5OShN7pIkpjxOWp5kqnilnyf7mnStPebcx/V3tFRIrcEfsX7pp5n91shaBAU5UDT7VlChGVudpl0x1Rd7c/FKVIIeEOIn7FOeEPaWc9tlUmlTVLnvrqPibypSs3Hs6N8O7vCUN2P45zlnQPKzaVtW+PKrUTvWoi9jBLvZpnseo4Rx1NMg7xCOe8GxcGMLIjclnqlHQmm18W8bDBwAlknk=</latexit> u 1 k <latexit sha1_base64=\"ezTQapOyOpLhgdL0OcB4k+rCMxo=\">AAACynicjVHLSsNAFD2Nr1pfVZdugkVwVRIRdFl048JFBfuAWksyndahaRImE6GE7vwBt/ph4h/oX3hnTEEtohOSnDn3nDtz7/XjQCTKcV4L1sLi0vJKcbW0tr6xuVXe3mkmUSoZb7AoiGTb9xIeiJA3lFABb8eSe2M/4C1/dK7jrXsuExGF12oS8+7YG4ZiIJiniGqlvWw0vXV75YpTdcyy54GbgwryVY/KL7hBHxEYUozBEUIRDuAhoacDFw5i4rrIiJOEhIlzTFEib0oqTgqP2BF9h7Tr5GxIe50zMW5GpwT0SnLaOCBPRDpJWJ9mm3hqMmv2t9yZyanvNqG/n+caE6twR+xfvpnyvz5di8IAp6YGQTXFhtHVsTxLarqib25/qUpRhpg4jfsUl4SZcc76bBtPYmrXvfVM/M0oNav3LNemeNe3pAG7P8c5D5pHVdepulfHldpZPuoi9rCPQ5rnCWq4QB0NU+UjnvBsXVrSmljZp9Qq5J5dfFvWwwfQcpIH</latexit> <latexit sha1_base64=\"ezTQapOyOpLhgdL0OcB4k+rCMxo=\">AAACynicjVHLSsNAFD2Nr1pfVZdugkVwVRIRdFl048JFBfuAWksyndahaRImE6GE7vwBt/ph4h/oX3hnTEEtohOSnDn3nDtz7/XjQCTKcV4L1sLi0vJKcbW0tr6xuVXe3mkmUSoZb7AoiGTb9xIeiJA3lFABb8eSe2M/4C1/dK7jrXsuExGF12oS8+7YG4ZiIJiniGqlvWw0vXV75YpTdcyy54GbgwryVY/KL7hBHxEYUozBEUIRDuAhoacDFw5i4rrIiJOEhIlzTFEib0oqTgqP2BF9h7Tr5GxIe50zMW5GpwT0SnLaOCBPRDpJWJ9mm3hqMmv2t9yZyanvNqG/n+caE6twR+xfvpnyvz5di8IAp6YGQTXFhtHVsTxLarqib25/qUpRhpg4jfsUl4SZcc76bBtPYmrXvfVM/M0oNav3LNemeNe3pAG7P8c5D5pHVdepulfHldpZPuoi9rCPQ5rnCWq4QB0NU+UjnvBsXVrSmljZp9Qq5J5dfFvWwwfQcpIH</latexit> <latexit 
sha1_base64=\"ezTQapOyOpLhgdL0OcB4k+rCMxo=\">AAACynicjVHLSsNAFD2Nr1pfVZdugkVwVRIRdFl048JFBfuAWksyndahaRImE6GE7vwBt/ph4h/oX3hnTEEtohOSnDn3nDtz7/XjQCTKcV4L1sLi0vJKcbW0tr6xuVXe3mkmUSoZb7AoiGTb9xIeiJA3lFABb8eSe2M/4C1/dK7jrXsuExGF12oS8+7YG4ZiIJiniGqlvWw0vXV75YpTdcyy54GbgwryVY/KL7hBHxEYUozBEUIRDuAhoacDFw5i4rrIiJOEhIlzTFEib0oqTgqP2BF9h7Tr5GxIe50zMW5GpwT0SnLaOCBPRDpJWJ9mm3hqMmv2t9yZyanvNqG/n+caE6twR+xfvpnyvz5di8IAp6YGQTXFhtHVsTxLarqib25/qUpRhpg4jfsUl4SZcc76bBtPYmrXvfVM/M0oNav3LNemeNe3pAG7P8c5D5pHVdepulfHldpZPuoi9rCPQ5rnCWq4QB0NU+UjnvBsXVrSmljZp9Qq5J5dfFvWwwfQcpIH</latexit> <latexit sha1_base64=\"ezTQapOyOpLhgdL0OcB4k+rCMxo=\">AAACynicjVHLSsNAFD2Nr1pfVZdugkVwVRIRdFl048JFBfuAWksyndahaRImE6GE7vwBt/ph4h/oX3hnTEEtohOSnDn3nDtz7/XjQCTKcV4L1sLi0vJKcbW0tr6xuVXe3mkmUSoZb7AoiGTb9xIeiJA3lFABb8eSe2M/4C1/dK7jrXsuExGF12oS8+7YG4ZiIJiniGqlvWw0vXV75YpTdcyy54GbgwryVY/KL7hBHxEYUozBEUIRDuAhoacDFw5i4rrIiJOEhIlzTFEib0oqTgqP2BF9h7Tr5GxIe50zMW5GpwT0SnLaOCBPRDpJWJ9mm3hqMmv2t9yZyanvNqG/n+caE6twR+xfvpnyvz5di8IAp6YGQTXFhtHVsTxLarqib25/qUpRhpg4jfsUl4SZcc76bBtPYmrXvfVM/M0oNav3LNemeNe3pAG7P8c5D5pHVdepulfHldpZPuoi9rCPQ5rnCWq4QB0NU+UjnvBsXVrSmljZp9Qq5J5dfFvWwwfQcpIH</latexit> u 2 k <latexit sha1_base64=\"r6h6YTsP2Cjh4gzlxZYLbW3TnHw=\">AAACynicjVHLSsNAFD3GV62vqks3wSK4KkkRdFl048JFBfuAWksyndahaRImE6GE7vwBt/ph4h/oX3hnTEEtohOSnDn3nDtz7/XjQCTKcV4XrMWl5ZXVwlpxfWNza7u0s9tMolQy3mBREMm27yU8ECFvKKEC3o4l98Z+wFv+6FzHW/dcJiIKr9Uk5t2xNwzFQDBPEdVKe9loelvtlcpOxTHLngduDsrIVz0qveAGfURgSDEGRwhFOICHhJ4OXDiIiesiI04SEibOMUWRvCmpOCk8Ykf0HdKuk7Mh7XXOxLgZnRLQK8lp45A8EekkYX2abeKpyazZ33JnJqe+24T+fp5rTKzCHbF/+WbK//p0LQoDnJoaBNUUG0ZXx/IsqemKvrn9pSpFGWLiNO5TXBJmxjnrs208iald99Yz8Tej1Kzes1yb4l3fkgbs/hznPGhWK65Tca+Oy7WzfNQF7OMARzTPE9RwgToapspHPOHZurSkNbGyT6m1kHv28G1ZDx/S0pII</latexit> <latexit 
sha1_base64=\"r6h6YTsP2Cjh4gzlxZYLbW3TnHw=\">AAACynicjVHLSsNAFD3GV62vqks3wSK4KkkRdFl048JFBfuAWksyndahaRImE6GE7vwBt/ph4h/oX3hnTEEtohOSnDn3nDtz7/XjQCTKcV4XrMWl5ZXVwlpxfWNza7u0s9tMolQy3mBREMm27yU8ECFvKKEC3o4l98Z+wFv+6FzHW/dcJiIKr9Uk5t2xNwzFQDBPEdVKe9loelvtlcpOxTHLngduDsrIVz0qveAGfURgSDEGRwhFOICHhJ4OXDiIiesiI04SEibOMUWRvCmpOCk8Ykf0HdKuk7Mh7XXOxLgZnRLQK8lp45A8EekkYX2abeKpyazZ33JnJqe+24T+fp5rTKzCHbF/+WbK//p0LQoDnJoaBNUUG0ZXx/IsqemKvrn9pSpFGWLiNO5TXBJmxjnrs208iald99Yz8Tej1Kzes1yb4l3fkgbs/hznPGhWK65Tca+Oy7WzfNQF7OMARzTPE9RwgToapspHPOHZurSkNbGyT6m1kHv28G1ZDx/S0pII</latexit> <latexit sha1_base64=\"r6h6YTsP2Cjh4gzlxZYLbW3TnHw=\">AAACynicjVHLSsNAFD3GV62vqks3wSK4KkkRdFl048JFBfuAWksyndahaRImE6GE7vwBt/ph4h/oX3hnTEEtohOSnDn3nDtz7/XjQCTKcV4XrMWl5ZXVwlpxfWNza7u0s9tMolQy3mBREMm27yU8ECFvKKEC3o4l98Z+wFv+6FzHW/dcJiIKr9Uk5t2xNwzFQDBPEdVKe9loelvtlcpOxTHLngduDsrIVz0qveAGfURgSDEGRwhFOICHhJ4OXDiIiesiI04SEibOMUWRvCmpOCk8Ykf0HdKuk7Mh7XXOxLgZnRLQK8lp45A8EekkYX2abeKpyazZ33JnJqe+24T+fp5rTKzCHbF/+WbK//p0LQoDnJoaBNUUG0ZXx/IsqemKvrn9pSpFGWLiNO5TXBJmxjnrs208iald99Yz8Tej1Kzes1yb4l3fkgbs/hznPGhWK65Tca+Oy7WzfNQF7OMARzTPE9RwgToapspHPOHZurSkNbGyT6m1kHv28G1ZDx/S0pII</latexit> <latexit sha1_base64=\"r6h6YTsP2Cjh4gzlxZYLbW3TnHw=\">AAACynicjVHLSsNAFD3GV62vqks3wSK4KkkRdFl048JFBfuAWksyndahaRImE6GE7vwBt/ph4h/oX3hnTEEtohOSnDn3nDtz7/XjQCTKcV4XrMWl5ZXVwlpxfWNza7u0s9tMolQy3mBREMm27yU8ECFvKKEC3o4l98Z+wFv+6FzHW/dcJiIKr9Uk5t2xNwzFQDBPEdVKe9loelvtlcpOxTHLngduDsrIVz0qveAGfURgSDEGRwhFOICHhJ4OXDiIiesiI04SEibOMUWRvCmpOCk8Ykf0HdKuk7Mh7XXOxLgZnRLQK8lp45A8EekkYX2abeKpyazZ33JnJqe+24T+fp5rTKzCHbF/+WbK//p0LQoDnJoaBNUUG0ZXx/IsqemKvrn9pSpFGWLiNO5TXBJmxjnrs208iald99Yz8Tej1Kzes1yb4l3fkgbs/hznPGhWK65Tca+Oy7WzfNQF7OMARzTPE9RwgToapspHPOHZurSkNbGyT6m1kHv28G1ZDx/S0pII</latexit> C k +1 <latexit 
sha1_base64=\"aOJPCuP2SKS5N5+jXxP8AHPUq50=\">AAAC1XicjVHLSsNAFD2Nr1pfUZdugkUQhJKIoMtiNy4r2Ae0pSTptIbmRTIplJKduPUH3OoviX+gf+GdcQpqEZ2Q5My595yZe68T+17KTfO1oC0tr6yuFddLG5tb2zv67l4zjbLEZQ038qOk7dgp872QNbjHfdaOE2YHjs9azrgm4q0JS1IvCm/4NGa9wB6F3tBzbU5UX9e7gc1vneGslvdn4xMr7+tls2LKZSwCS4Ey1KpH+gu6GCCCiwwBGEJwwj5spPR0YMFETFwPM+ISQp6MM+QokTajLEYZNrFj+o5o11FsSHvhmUq1S6f49CakNHBEmojyEsLiNEPGM+ks2N+8Z9JT3G1Kf0d5BcRy3BL7l26e+V+dqIVjiAtZg0c1xZIR1bnKJZNdETc3vlTFySEmTuABxRPCrlTO+2xITSprF721ZfxNZgpW7F2Vm+Fd3JIGbP0c5yJonlYss2Jdn5Wrl2rURRzgEMc0z3NUcYU6GuQ9wSOe8Ky1tFy70+4/U7WC0uzj29IePgDk+ZXz</latexit> <latexit sha1_base64=\"aOJPCuP2SKS5N5+jXxP8AHPUq50=\">AAAC1XicjVHLSsNAFD2Nr1pfUZdugkUQhJKIoMtiNy4r2Ae0pSTptIbmRTIplJKduPUH3OoviX+gf+GdcQpqEZ2Q5My595yZe68T+17KTfO1oC0tr6yuFddLG5tb2zv67l4zjbLEZQ038qOk7dgp872QNbjHfdaOE2YHjs9azrgm4q0JS1IvCm/4NGa9wB6F3tBzbU5UX9e7gc1vneGslvdn4xMr7+tls2LKZSwCS4Ey1KpH+gu6GCCCiwwBGEJwwj5spPR0YMFETFwPM+ISQp6MM+QokTajLEYZNrFj+o5o11FsSHvhmUq1S6f49CakNHBEmojyEsLiNEPGM+ks2N+8Z9JT3G1Kf0d5BcRy3BL7l26e+V+dqIVjiAtZg0c1xZIR1bnKJZNdETc3vlTFySEmTuABxRPCrlTO+2xITSprF721ZfxNZgpW7F2Vm+Fd3JIGbP0c5yJonlYss2Jdn5Wrl2rURRzgEMc0z3NUcYU6GuQ9wSOe8Ky1tFy70+4/U7WC0uzj29IePgDk+ZXz</latexit> <latexit sha1_base64=\"aOJPCuP2SKS5N5+jXxP8AHPUq50=\">AAAC1XicjVHLSsNAFD2Nr1pfUZdugkUQhJKIoMtiNy4r2Ae0pSTptIbmRTIplJKduPUH3OoviX+gf+GdcQpqEZ2Q5My595yZe68T+17KTfO1oC0tr6yuFddLG5tb2zv67l4zjbLEZQ038qOk7dgp872QNbjHfdaOE2YHjs9azrgm4q0JS1IvCm/4NGa9wB6F3tBzbU5UX9e7gc1vneGslvdn4xMr7+tls2LKZSwCS4Ey1KpH+gu6GCCCiwwBGEJwwj5spPR0YMFETFwPM+ISQp6MM+QokTajLEYZNrFj+o5o11FsSHvhmUq1S6f49CakNHBEmojyEsLiNEPGM+ks2N+8Z9JT3G1Kf0d5BcRy3BL7l26e+V+dqIVjiAtZg0c1xZIR1bnKJZNdETc3vlTFySEmTuABxRPCrlTO+2xITSprF721ZfxNZgpW7F2Vm+Fd3JIGbP0c5yJonlYss2Jdn5Wrl2rURRzgEMc0z3NUcYU6GuQ9wSOe8Ky1tFy70+4/U7WC0uzj29IePgDk+ZXz</latexit> <latexit 
sha1_base64=\"aOJPCuP2SKS5N5+jXxP8AHPUq50=\">AAAC1XicjVHLSsNAFD2Nr1pfUZdugkUQhJKIoMtiNy4r2Ae0pSTptIbmRTIplJKduPUH3OoviX+gf+GdcQpqEZ2Q5My595yZe68T+17KTfO1oC0tr6yuFddLG5tb2zv67l4zjbLEZQ038qOk7dgp872QNbjHfdaOE2YHjs9azrgm4q0JS1IvCm/4NGa9wB6F3tBzbU5UX9e7gc1vneGslvdn4xMr7+tls2LKZSwCS4Ey1KpH+gu6GCCCiwwBGEJwwj5spPR0YMFETFwPM+ISQp6MM+QokTajLEYZNrFj+o5o11FsSHvhmUq1S6f49CakNHBEmojyEsLiNEPGM+ks2N+8Z9JT3G1Kf0d5BcRy3BL7l26e+V+dqIVjiAtZg0c1xZIR1bnKJZNdETc3vlTFySEmTuABxRPCrlTO+2xITSprF721ZfxNZgpW7F2Vm+Fd3JIGbP0c5yJonlYss2Jdn5Wrl2rURRzgEMc0z3NUcYU6GuQ9wSOe8Ky1tFy70+4/U7WC0uzj29IePgDk+ZXz</latexit> C k <latexit sha1_base64=\"R7tqcvCgMhrKmzYDK0B19C9ArQg=\">AAAC0XicjVHLSsNAFD2Nr1pfVZdugkVwVRIRdFnsxmVF+4BWS5JO29C8mEyEUgri1h9wqz8l/oH+hXfGKahFdEKSM+fec2buvW4S+KmwrNecsbC4tLySXy2srW9sbhW3dxppnHGP1b04iHnLdVIW+BGrC18ErJVw5oRuwJruqCrjzVvGUz+OrsQ4YdehM4j8vu85gqibTuiIodufVKfdyWjaLZassqWWOQ9sDUrQqxYXX9BBDzE8ZAjBEEEQDuAgpacNGxYS4q4xIY4T8lWcYYoCaTPKYpThEDui74B2bc1GtJeeqVJ7dEpALyeliQPSxJTHCcvTTBXPlLNkf/OeKE95tzH9Xe0VEiswJPYv3SzzvzpZi0Afp6oGn2pKFCOr87RLproib25+qUqQQ0KcxD2Kc8KeUs76bCpNqmqXvXVU/E1lSlbuPZ2b4V3ekgZs/xznPGgclW2rbF8clypnetR57GEfhzTPE1Rwjhrq5M3xiCc8G5fG2Lgz7j9TjZzW7OLbMh4+ACEZlVI=</latexit> <latexit sha1_base64=\"R7tqcvCgMhrKmzYDK0B19C9ArQg=\">AAAC0XicjVHLSsNAFD2Nr1pfVZdugkVwVRIRdFnsxmVF+4BWS5JO29C8mEyEUgri1h9wqz8l/oH+hXfGKahFdEKSM+fec2buvW4S+KmwrNecsbC4tLySXy2srW9sbhW3dxppnHGP1b04iHnLdVIW+BGrC18ErJVw5oRuwJruqCrjzVvGUz+OrsQ4YdehM4j8vu85gqibTuiIodufVKfdyWjaLZassqWWOQ9sDUrQqxYXX9BBDzE8ZAjBEEEQDuAgpacNGxYS4q4xIY4T8lWcYYoCaTPKYpThEDui74B2bc1GtJeeqVJ7dEpALyeliQPSxJTHCcvTTBXPlLNkf/OeKE95tzH9Xe0VEiswJPYv3SzzvzpZi0Afp6oGn2pKFCOr87RLproib25+qUqQQ0KcxD2Kc8KeUs76bCpNqmqXvXVU/E1lSlbuPZ2b4V3ekgZs/xznPGgclW2rbF8clypnetR57GEfhzTPE1Rwjhrq5M3xiCc8G5fG2Lgz7j9TjZzW7OLbMh4+ACEZlVI=</latexit> <latexit 
sha1_base64=\"R7tqcvCgMhrKmzYDK0B19C9ArQg=\">AAAC0XicjVHLSsNAFD2Nr1pfVZdugkVwVRIRdFnsxmVF+4BWS5JO29C8mEyEUgri1h9wqz8l/oH+hXfGKahFdEKSM+fec2buvW4S+KmwrNecsbC4tLySXy2srW9sbhW3dxppnHGP1b04iHnLdVIW+BGrC18ErJVw5oRuwJruqCrjzVvGUz+OrsQ4YdehM4j8vu85gqibTuiIodufVKfdyWjaLZassqWWOQ9sDUrQqxYXX9BBDzE8ZAjBEEEQDuAgpacNGxYS4q4xIY4T8lWcYYoCaTPKYpThEDui74B2bc1GtJeeqVJ7dEpALyeliQPSxJTHCcvTTBXPlLNkf/OeKE95tzH9Xe0VEiswJPYv3SzzvzpZi0Afp6oGn2pKFCOr87RLproib25+qUqQQ0KcxD2Kc8KeUs76bCpNqmqXvXVU/E1lSlbuPZ2b4V3ekgZs/xznPGgclW2rbF8clypnetR57GEfhzTPE1Rwjhrq5M3xiCc8G5fG2Lgz7j9TjZzW7OLbMh4+ACEZlVI=</latexit> <latexit sha1_base64=\"R7tqcvCgMhrKmzYDK0B19C9ArQg=\">AAAC0XicjVHLSsNAFD2Nr1pfVZdugkVwVRIRdFnsxmVF+4BWS5JO29C8mEyEUgri1h9wqz8l/oH+hXfGKahFdEKSM+fec2buvW4S+KmwrNecsbC4tLySXy2srW9sbhW3dxppnHGP1b04iHnLdVIW+BGrC18ErJVw5oRuwJruqCrjzVvGUz+OrsQ4YdehM4j8vu85gqibTuiIodufVKfdyWjaLZassqWWOQ9sDUrQqxYXX9BBDzE8ZAjBEEEQDuAgpacNGxYS4q4xIY4T8lWcYYoCaTPKYpThEDui74B2bc1GtJeeqVJ7dEpALyeliQPSxJTHCcvTTBXPlLNkf/OeKE95tzH9Xe0VEiswJPYv3SzzvzpZi0Afp6oGn2pKFCOr87RLproib25+qUqQQ0KcxD2Kc8KeUs76bCpNqmqXvXVU/E1lSlbuPZ2b4V3ekgZs/xznPGgclW2rbF8clypnetR57GEfhzTPE1Rwjhrq5M3xiCc8G5fG2Lgz7j9TjZzW7OLbMh4+ACEZlVI=</latexit> h 2 k \u0000 1 <latexit sha1_base64=\"BOWbRVkosmkAT6wz4iD6WvBcc3c=\">AAAC13icjVHLSsNAFD2Nr/qOdekmWAQ3lqQIuiy6cVnBPqStJUmnbWheJBOxhOJO3PoDbvWPxD/Qv/DOmIJaRCckOXPuPWfm3muFrhNzXX/NKXPzC4tL+eWV1bX1jU11q1CPgySyWc0O3CBqWmbMXMdnNe5wlzXDiJme5bKGNToV8cY1i2In8C/4OGQdzxz4Tt+xTU5UVy20PZMPrX46nHTT0YExuSp31aJe0uXSZoGRgSKyVQ3UF7TRQwAbCTww+OCEXZiI6WnBgI6QuA5S4iJCjowzTLBC2oSyGGWYxI7oO6BdK2N92gvPWKptOsWlNyKlhj3SBJQXERanaTKeSGfB/uadSk9xtzH9rczLI5ZjSOxfumnmf3WiFo4+jmUNDtUUSkZUZ2cuieyKuLn2pSpODiFxAvcoHhG2pXLaZ01qYlm76K0p428yU7Bib2e5Cd7FLWnAxs9xzoJ6uWToJeP8sFg5yUadxw52sU/zPEIFZ6iiRt43eMQTnpVL5Va5U+4/U5VcptnGt6U8fADxtJa+</latexit> <latexit 
sha1_base64=\"BOWbRVkosmkAT6wz4iD6WvBcc3c=\">AAAC13icjVHLSsNAFD2Nr/qOdekmWAQ3lqQIuiy6cVnBPqStJUmnbWheJBOxhOJO3PoDbvWPxD/Qv/DOmIJaRCckOXPuPWfm3muFrhNzXX/NKXPzC4tL+eWV1bX1jU11q1CPgySyWc0O3CBqWmbMXMdnNe5wlzXDiJme5bKGNToV8cY1i2In8C/4OGQdzxz4Tt+xTU5UVy20PZMPrX46nHTT0YExuSp31aJe0uXSZoGRgSKyVQ3UF7TRQwAbCTww+OCEXZiI6WnBgI6QuA5S4iJCjowzTLBC2oSyGGWYxI7oO6BdK2N92gvPWKptOsWlNyKlhj3SBJQXERanaTKeSGfB/uadSk9xtzH9rczLI5ZjSOxfumnmf3WiFo4+jmUNDtUUSkZUZ2cuieyKuLn2pSpODiFxAvcoHhG2pXLaZ01qYlm76K0p428yU7Bib2e5Cd7FLWnAxs9xzoJ6uWToJeP8sFg5yUadxw52sU/zPEIFZ6iiRt43eMQTnpVL5Va5U+4/U5VcptnGt6U8fADxtJa+</latexit> <latexit sha1_base64=\"BOWbRVkosmkAT6wz4iD6WvBcc3c=\">AAAC13icjVHLSsNAFD2Nr/qOdekmWAQ3lqQIuiy6cVnBPqStJUmnbWheJBOxhOJO3PoDbvWPxD/Qv/DOmIJaRCckOXPuPWfm3muFrhNzXX/NKXPzC4tL+eWV1bX1jU11q1CPgySyWc0O3CBqWmbMXMdnNe5wlzXDiJme5bKGNToV8cY1i2In8C/4OGQdzxz4Tt+xTU5UVy20PZMPrX46nHTT0YExuSp31aJe0uXSZoGRgSKyVQ3UF7TRQwAbCTww+OCEXZiI6WnBgI6QuA5S4iJCjowzTLBC2oSyGGWYxI7oO6BdK2N92gvPWKptOsWlNyKlhj3SBJQXERanaTKeSGfB/uadSk9xtzH9rczLI5ZjSOxfumnmf3WiFo4+jmUNDtUUSkZUZ2cuieyKuLn2pSpODiFxAvcoHhG2pXLaZ01qYlm76K0p428yU7Bib2e5Cd7FLWnAxs9xzoJ6uWToJeP8sFg5yUadxw52sU/zPEIFZ6iiRt43eMQTnpVL5Va5U+4/U5VcptnGt6U8fADxtJa+</latexit> <latexit sha1_base64=\"BOWbRVkosmkAT6wz4iD6WvBcc3c=\">AAAC13icjVHLSsNAFD2Nr/qOdekmWAQ3lqQIuiy6cVnBPqStJUmnbWheJBOxhOJO3PoDbvWPxD/Qv/DOmIJaRCckOXPuPWfm3muFrhNzXX/NKXPzC4tL+eWV1bX1jU11q1CPgySyWc0O3CBqWmbMXMdnNe5wlzXDiJme5bKGNToV8cY1i2In8C/4OGQdzxz4Tt+xTU5UVy20PZMPrX46nHTT0YExuSp31aJe0uXSZoGRgSKyVQ3UF7TRQwAbCTww+OCEXZiI6WnBgI6QuA5S4iJCjowzTLBC2oSyGGWYxI7oO6BdK2N92gvPWKptOsWlNyKlhj3SBJQXERanaTKeSGfB/uadSk9xtzH9rczLI5ZjSOxfumnmf3WiFo4+jmUNDtUUSkZUZ2cuieyKuLn2pSpODiFxAvcoHhG2pXLaZ01qYlm76K0p428yU7Bib2e5Cd7FLWnAxs9xzoJ6uWToJeP8sFg5yUadxw52sU/zPEIFZ6iiRt43eMQTnpVL5Va5U+4/U5VcptnGt6U8fADxtJa+</latexit> h 1 k \u0000 1 <latexit 
sha1_base64=\"ek0HzBtIiatHlvpcNURaxJ/vJw0=\">AAAC13icjVHLSsNAFD2Nr1pfsS7dBIvgxpKIoMuiG5cV7EPaWpJ02obmRTIRSyjuxK0/4Fb/SPwD/QvvjCmoRXRCkjPn3nNm7r1W6Dox1/XXnDI3v7C4lF8urKyurW+om8V6HCSRzWp24AZR0zJj5jo+q3GHu6wZRsz0LJc1rNGpiDeuWRQ7gX/BxyHreObAd/qObXKiumqx7Zl8aPXT4aSbjvaNyZXRVUt6WZdLmwVGBkrIVjVQX9BGDwFsJPDA4IMTdmEipqcFAzpC4jpIiYsIOTLOMEGBtAllMcowiR3Rd0C7Vsb6tBeesVTbdIpLb0RKDbukCSgvIixO02Q8kc6C/c07lZ7ibmP6W5mXRyzHkNi/dNPM/+pELRx9HMsaHKoplIyozs5cEtkVcXPtS1WcHELiBO5RPCJsS+W0z5rUxLJ20VtTxt9kpmDF3s5yE7yLW9KAjZ/jnAX1g7Khl43zw1LlJBt1HtvYwR7N8wgVnKGKGnnf4BFPeFYulVvlTrn/TFVymWYL35by8AHvVJa9</latexit> <latexit sha1_base64=\"ek0HzBtIiatHlvpcNURaxJ/vJw0=\">AAAC13icjVHLSsNAFD2Nr1pfsS7dBIvgxpKIoMuiG5cV7EPaWpJ02obmRTIRSyjuxK0/4Fb/SPwD/QvvjCmoRXRCkjPn3nNm7r1W6Dox1/XXnDI3v7C4lF8urKyurW+om8V6HCSRzWp24AZR0zJj5jo+q3GHu6wZRsz0LJc1rNGpiDeuWRQ7gX/BxyHreObAd/qObXKiumqx7Zl8aPXT4aSbjvaNyZXRVUt6WZdLmwVGBkrIVjVQX9BGDwFsJPDA4IMTdmEipqcFAzpC4jpIiYsIOTLOMEGBtAllMcowiR3Rd0C7Vsb6tBeesVTbdIpLb0RKDbukCSgvIixO02Q8kc6C/c07lZ7ibmP6W5mXRyzHkNi/dNPM/+pELRx9HMsaHKoplIyozs5cEtkVcXPtS1WcHELiBO5RPCJsS+W0z5rUxLJ20VtTxt9kpmDF3s5yE7yLW9KAjZ/jnAX1g7Khl43zw1LlJBt1HtvYwR7N8wgVnKGKGnnf4BFPeFYulVvlTrn/TFVymWYL35by8AHvVJa9</latexit> <latexit sha1_base64=\"ek0HzBtIiatHlvpcNURaxJ/vJw0=\">AAAC13icjVHLSsNAFD2Nr1pfsS7dBIvgxpKIoMuiG5cV7EPaWpJ02obmRTIRSyjuxK0/4Fb/SPwD/QvvjCmoRXRCkjPn3nNm7r1W6Dox1/XXnDI3v7C4lF8urKyurW+om8V6HCSRzWp24AZR0zJj5jo+q3GHu6wZRsz0LJc1rNGpiDeuWRQ7gX/BxyHreObAd/qObXKiumqx7Zl8aPXT4aSbjvaNyZXRVUt6WZdLmwVGBkrIVjVQX9BGDwFsJPDA4IMTdmEipqcFAzpC4jpIiYsIOTLOMEGBtAllMcowiR3Rd0C7Vsb6tBeesVTbdIpLb0RKDbukCSgvIixO02Q8kc6C/c07lZ7ibmP6W5mXRyzHkNi/dNPM/+pELRx9HMsaHKoplIyozs5cEtkVcXPtS1WcHELiBO5RPCJsS+W0z5rUxLJ20VtTxt9kpmDF3s5yE7yLW9KAjZ/jnAX1g7Khl43zw1LlJBt1HtvYwR7N8wgVnKGKGnnf4BFPeFYulVvlTrn/TFVymWYL35by8AHvVJa9</latexit> <latexit 
sha1_base64=\"ek0HzBtIiatHlvpcNURaxJ/vJw0=\">AAAC13icjVHLSsNAFD2Nr1pfsS7dBIvgxpKIoMuiG5cV7EPaWpJ02obmRTIRSyjuxK0/4Fb/SPwD/QvvjCmoRXRCkjPn3nNm7r1W6Dox1/XXnDI3v7C4lF8urKyurW+om8V6HCSRzWp24AZR0zJj5jo+q3GHu6wZRsz0LJc1rNGpiDeuWRQ7gX/BxyHreObAd/qObXKiumqx7Zl8aPXT4aSbjvaNyZXRVUt6WZdLmwVGBkrIVjVQX9BGDwFsJPDA4IMTdmEipqcFAzpC4jpIiYsIOTLOMEGBtAllMcowiR3Rd0C7Vsb6tBeesVTbdIpLb0RKDbukCSgvIixO02Q8kc6C/c07lZ7ibmP6W5mXRyzHkNi/dNPM/+pELRx9HMsaHKoplIyozs5cEtkVcXPtS1WcHELiBO5RPCJsS+W0z5rUxLJ20VtTxt9kpmDF3s5yE7yLW9KAjZ/jnAX1g7Khl43zw1LlJBt1HtvYwR7N8wgVnKGKGnnf4BFPeFYulVvlTrn/TFVymWYL35by8AHvVJa9</latexit> I 0 k <latexit sha1_base64=\"FXWNdodQi2dOqEdCfEM1yEEiYo0=\">AAAC1HicjVHLSsNAFD2Nr1ofjbp0Eyyiq5KIoMuiG91VsA9oS0nSaRuaF5OJUGpX4tYfcKvfJP6B/oV3xhTUIjohyZlzz7kz914n9r1EmOZrTltYXFpeya8W1tY3Nov61nY9iVLuspob+RFvOnbCfC9kNeEJnzVjzuzA8VnDGZ3LeOOG8cSLwmsxjlknsAeh1/dcWxDV1YvtwBZDpz+5nHYno+lBVy+ZZVMtYx5YGSghW9VIf0EbPURwkSIAQwhB2IeNhJ4WLJiIietgQhwn5Kk4wxQF8qakYqSwiR3Rd0C7VsaGtJc5E+V26RSfXk5OA/vkiUjHCcvTDBVPVWbJ/pZ7onLKu43p72S5AmIFhsT+5Zsp/+uTtQj0capq8KimWDGyOjfLkqquyJsbX6oSlCEmTuIexTlhVzlnfTaUJ1G1y97aKv6mlJKVezfTpniXt6QBWz/HOQ/qR2XLLFtXx6XKWTbqPHaxh0Oa5wkquEAVNTXzRzzhWatrt9qddv8p1XKZZwfflvbwAUoqlbo=</latexit> <latexit sha1_base64=\"FXWNdodQi2dOqEdCfEM1yEEiYo0=\">AAAC1HicjVHLSsNAFD2Nr1ofjbp0Eyyiq5KIoMuiG91VsA9oS0nSaRuaF5OJUGpX4tYfcKvfJP6B/oV3xhTUIjohyZlzz7kz914n9r1EmOZrTltYXFpeya8W1tY3Nov61nY9iVLuspob+RFvOnbCfC9kNeEJnzVjzuzA8VnDGZ3LeOOG8cSLwmsxjlknsAeh1/dcWxDV1YvtwBZDpz+5nHYno+lBVy+ZZVMtYx5YGSghW9VIf0EbPURwkSIAQwhB2IeNhJ4WLJiIietgQhwn5Kk4wxQF8qakYqSwiR3Rd0C7VsaGtJc5E+V26RSfXk5OA/vkiUjHCcvTDBVPVWbJ/pZ7onLKu43p72S5AmIFhsT+5Zsp/+uTtQj0capq8KimWDGyOjfLkqquyJsbX6oSlCEmTuIexTlhVzlnfTaUJ1G1y97aKv6mlJKVezfTpniXt6QBWz/HOQ/qR2XLLFtXx6XKWTbqPHaxh0Oa5wkquEAVNTXzRzzhWatrt9qddv8p1XKZZwfflvbwAUoqlbo=</latexit> <latexit 
sha1_base64=\"FXWNdodQi2dOqEdCfEM1yEEiYo0=\">AAAC1HicjVHLSsNAFD2Nr1ofjbp0Eyyiq5KIoMuiG91VsA9oS0nSaRuaF5OJUGpX4tYfcKvfJP6B/oV3xhTUIjohyZlzz7kz914n9r1EmOZrTltYXFpeya8W1tY3Nov61nY9iVLuspob+RFvOnbCfC9kNeEJnzVjzuzA8VnDGZ3LeOOG8cSLwmsxjlknsAeh1/dcWxDV1YvtwBZDpz+5nHYno+lBVy+ZZVMtYx5YGSghW9VIf0EbPURwkSIAQwhB2IeNhJ4WLJiIietgQhwn5Kk4wxQF8qakYqSwiR3Rd0C7VsaGtJc5E+V26RSfXk5OA/vkiUjHCcvTDBVPVWbJ/pZ7onLKu43p72S5AmIFhsT+5Zsp/+uTtQj0capq8KimWDGyOjfLkqquyJsbX6oSlCEmTuIexTlhVzlnfTaUJ1G1y97aKv6mlJKVezfTpniXt6QBWz/HOQ/qR2XLLFtXx6XKWTbqPHaxh0Oa5wkquEAVNTXzRzzhWatrt9qddv8p1XKZZwfflvbwAUoqlbo=</latexit> <latexit sha1_base64=\"FXWNdodQi2dOqEdCfEM1yEEiYo0=\">AAAC1HicjVHLSsNAFD2Nr1ofjbp0Eyyiq5KIoMuiG91VsA9oS0nSaRuaF5OJUGpX4tYfcKvfJP6B/oV3xhTUIjohyZlzz7kz914n9r1EmOZrTltYXFpeya8W1tY3Nov61nY9iVLuspob+RFvOnbCfC9kNeEJnzVjzuzA8VnDGZ3LeOOG8cSLwmsxjlknsAeh1/dcWxDV1YvtwBZDpz+5nHYno+lBVy+ZZVMtYx5YGSghW9VIf0EbPURwkSIAQwhB2IeNhJ4WLJiIietgQhwn5Kk4wxQF8qakYqSwiR3Rd0C7VsaGtJc5E+V26RSfXk5OA/vkiUjHCcvTDBVPVWbJ/pZ7onLKu43p72S5AmIFhsT+5Zsp/+uTtQj0capq8KimWDGyOjfLkqquyJsbX6oSlCEmTuIexTlhVzlnfTaUJ1G1y97aKv6mlJKVezfTpniXt6QBWz/HOQ/qR2XLLFtXx6XKWTbqPHaxh0Oa5wkquEAVNTXzRzzhWatrt9qddv8p1XKZZwfflvbwAUoqlbo=</latexit> I 0 k <latexit sha1_base64=\"FXWNdodQi2dOqEdCfEM1yEEiYo0=\">AAAC1HicjVHLSsNAFD2Nr1ofjbp0Eyyiq5KIoMuiG91VsA9oS0nSaRuaF5OJUGpX4tYfcKvfJP6B/oV3xhTUIjohyZlzz7kz914n9r1EmOZrTltYXFpeya8W1tY3Nov61nY9iVLuspob+RFvOnbCfC9kNeEJnzVjzuzA8VnDGZ3LeOOG8cSLwmsxjlknsAeh1/dcWxDV1YvtwBZDpz+5nHYno+lBVy+ZZVMtYx5YGSghW9VIf0EbPURwkSIAQwhB2IeNhJ4WLJiIietgQhwn5Kk4wxQF8qakYqSwiR3Rd0C7VsaGtJc5E+V26RSfXk5OA/vkiUjHCcvTDBVPVWbJ/pZ7onLKu43p72S5AmIFhsT+5Zsp/+uTtQj0capq8KimWDGyOjfLkqquyJsbX6oSlCEmTuIexTlhVzlnfTaUJ1G1y97aKv6mlJKVezfTpniXt6QBWz/HOQ/qR2XLLFtXx6XKWTbqPHaxh0Oa5wkquEAVNTXzRzzhWatrt9qddv8p1XKZZwfflvbwAUoqlbo=</latexit> <latexit 
sha1_base64=\"FXWNdodQi2dOqEdCfEM1yEEiYo0=\">AAAC1HicjVHLSsNAFD2Nr1ofjbp0Eyyiq5KIoMuiG91VsA9oS0nSaRuaF5OJUGpX4tYfcKvfJP6B/oV3xhTUIjohyZlzz7kz914n9r1EmOZrTltYXFpeya8W1tY3Nov61nY9iVLuspob+RFvOnbCfC9kNeEJnzVjzuzA8VnDGZ3LeOOG8cSLwmsxjlknsAeh1/dcWxDV1YvtwBZDpz+5nHYno+lBVy+ZZVMtYx5YGSghW9VIf0EbPURwkSIAQwhB2IeNhJ4WLJiIietgQhwn5Kk4wxQF8qakYqSwiR3Rd0C7VsaGtJc5E+V26RSfXk5OA/vkiUjHCcvTDBVPVWbJ/pZ7onLKu43p72S5AmIFhsT+5Zsp/+uTtQj0capq8KimWDGyOjfLkqquyJsbX6oSlCEmTuIexTlhVzlnfTaUJ1G1y97aKv6mlJKVezfTpniXt6QBWz/HOQ/qR2XLLFtXx6XKWTbqPHaxh0Oa5wkquEAVNTXzRzzhWatrt9qddv8p1XKZZwfflvbwAUoqlbo=</latexit> <latexit sha1_base64=\"FXWNdodQi2dOqEdCfEM1yEEiYo0=\">AAAC1HicjVHLSsNAFD2Nr1ofjbp0Eyyiq5KIoMuiG91VsA9oS0nSaRuaF5OJUGpX4tYfcKvfJP6B/oV3xhTUIjohyZlzz7kz914n9r1EmOZrTltYXFpeya8W1tY3Nov61nY9iVLuspob+RFvOnbCfC9kNeEJnzVjzuzA8VnDGZ3LeOOG8cSLwmsxjlknsAeh1/dcWxDV1YvtwBZDpz+5nHYno+lBVy+ZZVMtYx5YGSghW9VIf0EbPURwkSIAQwhB2IeNhJ4WLJiIietgQhwn5Kk4wxQF8qakYqSwiR3Rd0C7VsaGtJc5E+V26RSfXk5OA/vkiUjHCcvTDBVPVWbJ/pZ7onLKu43p72S5AmIFhsT+5Zsp/+uTtQj0capq8KimWDGyOjfLkqquyJsbX6oSlCEmTuIexTlhVzlnfTaUJ1G1y97aKv6mlJKVezfTpniXt6QBWz/HOQ/qR2XLLFtXx6XKWTbqPHaxh0Oa5wkquEAVNTXzRzzhWatrt9qddv8p1XKZZwfflvbwAUoqlbo=</latexit> <latexit sha1_base64=\"FXWNdodQi2dOqEdCfEM1yEEiYo0=\">AAAC1HicjVHLSsNAFD2Nr1ofjbp0Eyyiq5KIoMuiG91VsA9oS0nSaRuaF5OJUGpX4tYfcKvfJP6B/oV3xhTUIjohyZlzz7kz914n9r1EmOZrTltYXFpeya8W1tY3Nov61nY9iVLuspob+RFvOnbCfC9kNeEJnzVjzuzA8VnDGZ3LeOOG8cSLwmsxjlknsAeh1/dcWxDV1YvtwBZDpz+5nHYno+lBVy+ZZVMtYx5YGSghW9VIf0EbPURwkSIAQwhB2IeNhJ4WLJiIietgQhwn5Kk4wxQF8qakYqSwiR3Rd0C7VsaGtJc5E+V26RSfXk5OA/vkiUjHCcvTDBVPVWbJ/pZ7onLKu43p72S5AmIFhsT+5Zsp/+uTtQj0capq8KimWDGyOjfLkqquyJsbX6oSlCEmTuIexTlhVzlnfTaUJ1G1y97aKv6mlJKVezfTpniXt6QBWz/HOQ/qR2XLLFtXx6XKWTbqPHaxh0Oa5wkquEAVNTXzRzzhWatrt9qddv8p1XKZZwfflvbwAUoqlbo=</latexit> I 0 k <latexit 
sha1_base64=\"FXWNdodQi2dOqEdCfEM1yEEiYo0=\">AAAC1HicjVHLSsNAFD2Nr1ofjbp0Eyyiq5KIoMuiG91VsA9oS0nSaRuaF5OJUGpX4tYfcKvfJP6B/oV3xhTUIjohyZlzz7kz914n9r1EmOZrTltYXFpeya8W1tY3Nov61nY9iVLuspob+RFvOnbCfC9kNeEJnzVjzuzA8VnDGZ3LeOOG8cSLwmsxjlknsAeh1/dcWxDV1YvtwBZDpz+5nHYno+lBVy+ZZVMtYx5YGSghW9VIf0EbPURwkSIAQwhB2IeNhJ4WLJiIietgQhwn5Kk4wxQF8qakYqSwiR3Rd0C7VsaGtJc5E+V26RSfXk5OA/vkiUjHCcvTDBVPVWbJ/pZ7onLKu43p72S5AmIFhsT+5Zsp/+uTtQj0capq8KimWDGyOjfLkqquyJsbX6oSlCEmTuIexTlhVzlnfTaUJ1G1y97aKv6mlJKVezfTpniXt6QBWz/HOQ/qR2XLLFtXx6XKWTbqPHaxh0Oa5wkquEAVNTXzRzzhWatrt9qddv8p1XKZZwfflvbwAUoqlbo=</latexit> <latexit sha1_base64=\"FXWNdodQi2dOqEdCfEM1yEEiYo0=\">AAAC1HicjVHLSsNAFD2Nr1ofjbp0Eyyiq5KIoMuiG91VsA9oS0nSaRuaF5OJUGpX4tYfcKvfJP6B/oV3xhTUIjohyZlzz7kz914n9r1EmOZrTltYXFpeya8W1tY3Nov61nY9iVLuspob+RFvOnbCfC9kNeEJnzVjzuzA8VnDGZ3LeOOG8cSLwmsxjlknsAeh1/dcWxDV1YvtwBZDpz+5nHYno+lBVy+ZZVMtYx5YGSghW9VIf0EbPURwkSIAQwhB2IeNhJ4WLJiIietgQhwn5Kk4wxQF8qakYqSwiR3Rd0C7VsaGtJc5E+V26RSfXk5OA/vkiUjHCcvTDBVPVWbJ/pZ7onLKu43p72S5AmIFhsT+5Zsp/+uTtQj0capq8KimWDGyOjfLkqquyJsbX6oSlCEmTuIexTlhVzlnfTaUJ1G1y97aKv6mlJKVezfTpniXt6QBWz/HOQ/qR2XLLFtXx6XKWTbqPHaxh0Oa5wkquEAVNTXzRzzhWatrt9qddv8p1XKZZwfflvbwAUoqlbo=</latexit> <latexit sha1_base64=\"FXWNdodQi2dOqEdCfEM1yEEiYo0=\">AAAC1HicjVHLSsNAFD2Nr1ofjbp0Eyyiq5KIoMuiG91VsA9oS0nSaRuaF5OJUGpX4tYfcKvfJP6B/oV3xhTUIjohyZlzz7kz914n9r1EmOZrTltYXFpeya8W1tY3Nov61nY9iVLuspob+RFvOnbCfC9kNeEJnzVjzuzA8VnDGZ3LeOOG8cSLwmsxjlknsAeh1/dcWxDV1YvtwBZDpz+5nHYno+lBVy+ZZVMtYx5YGSghW9VIf0EbPURwkSIAQwhB2IeNhJ4WLJiIietgQhwn5Kk4wxQF8qakYqSwiR3Rd0C7VsaGtJc5E+V26RSfXk5OA/vkiUjHCcvTDBVPVWbJ/pZ7onLKu43p72S5AmIFhsT+5Zsp/+uTtQj0capq8KimWDGyOjfLkqquyJsbX6oSlCEmTuIexTlhVzlnfTaUJ1G1y97aKv6mlJKVezfTpniXt6QBWz/HOQ/qR2XLLFtXx6XKWTbqPHaxh0Oa5wkquEAVNTXzRzzhWatrt9qddv8p1XKZZwfflvbwAUoqlbo=</latexit> <latexit 
sha1_base64=\"FXWNdodQi2dOqEdCfEM1yEEiYo0=\">AAAC1HicjVHLSsNAFD2Nr1ofjbp0Eyyiq5KIoMuiG91VsA9oS0nSaRuaF5OJUGpX4tYfcKvfJP6B/oV3xhTUIjohyZlzz7kz914n9r1EmOZrTltYXFpeya8W1tY3Nov61nY9iVLuspob+RFvOnbCfC9kNeEJnzVjzuzA8VnDGZ3LeOOG8cSLwmsxjlknsAeh1/dcWxDV1YvtwBZDpz+5nHYno+lBVy+ZZVMtYx5YGSghW9VIf0EbPURwkSIAQwhB2IeNhJ4WLJiIietgQhwn5Kk4wxQF8qakYqSwiR3Rd0C7VsaGtJc5E+V26RSfXk5OA/vkiUjHCcvTDBVPVWbJ/pZ7onLKu43p72S5AmIFhsT+5Zsp/+uTtQj0capq8KimWDGyOjfLkqquyJsbX6oSlCEmTuIexTlhVzlnfTaUJ1G1y97aKv6mlJKVezfTpniXt6QBWz/HOQ/qR2XLLFtXx6XKWTbqPHaxh0Oa5wkquEAVNTXzRzzhWatrt9qddv8p1XKZZwfflvbwAUoqlbo=</latexit> PE+ h 1 k <latexit sha1_base64=\"2hDWUksPO3HucBzuPW+qo2DrXZs=\">AAAC0XicjVHLSsNAFD3GV62vqks3wSK4KokIuiy6cVnRPqAvJum0Dc2LyUQopSBu/QG3+lPiH+hfeGdMQS2iE5KcOfeeM3PvdWLfS6RlvS4Yi0vLK6u5tfz6xubWdmFnt5ZEqXB51Y38SDQclnDfC3lVetLnjVhwFjg+rzujCxWv33KReFF4I8cxbwdsEHp9z2WSqE4rYHLo9CfDaXfUsbuFolWy9DLngZ2BIrJViQovaKGHCC5SBOAIIQn7YEjoacKGhZi4NibECUKejnNMkSdtSlmcMhixI/oOaNfM2JD2yjPRapdO8ekVpDRxSJqI8gRhdZqp46l2Vuxv3hPtqe42pr+TeQXESgyJ/Us3y/yvTtUi0ceZrsGjmmLNqOrczCXVXVE3N79UJckhJk7hHsUFYVcrZ302tSbRtaveMh1/05mKVXs3y03xrm5JA7Z/jnMe1I5LtlWyr06K5fNs1Dns4wBHNM9TlHGJCqrkLfCIJzwb18bYuDPuP1ONhUyzh2/LePgAgBaVDg==</latexit> <latexit sha1_base64=\"2hDWUksPO3HucBzuPW+qo2DrXZs=\">AAAC0XicjVHLSsNAFD3GV62vqks3wSK4KokIuiy6cVnRPqAvJum0Dc2LyUQopSBu/QG3+lPiH+hfeGdMQS2iE5KcOfeeM3PvdWLfS6RlvS4Yi0vLK6u5tfz6xubWdmFnt5ZEqXB51Y38SDQclnDfC3lVetLnjVhwFjg+rzujCxWv33KReFF4I8cxbwdsEHp9z2WSqE4rYHLo9CfDaXfUsbuFolWy9DLngZ2BIrJViQovaKGHCC5SBOAIIQn7YEjoacKGhZi4NibECUKejnNMkSdtSlmcMhixI/oOaNfM2JD2yjPRapdO8ekVpDRxSJqI8gRhdZqp46l2Vuxv3hPtqe42pr+TeQXESgyJ/Us3y/yvTtUi0ceZrsGjmmLNqOrczCXVXVE3N79UJckhJk7hHsUFYVcrZ302tSbRtaveMh1/05mKVXs3y03xrm5JA7Z/jnMe1I5LtlWyr06K5fNs1Dns4wBHNM9TlHGJCqrkLfCIJzwb18bYuDPuP1ONhUyzh2/LePgAgBaVDg==</latexit> <latexit 
sha1_base64=\"2hDWUksPO3HucBzuPW+qo2DrXZs=\">AAAC0XicjVHLSsNAFD3GV62vqks3wSK4KokIuiy6cVnRPqAvJum0Dc2LyUQopSBu/QG3+lPiH+hfeGdMQS2iE5KcOfeeM3PvdWLfS6RlvS4Yi0vLK6u5tfz6xubWdmFnt5ZEqXB51Y38SDQclnDfC3lVetLnjVhwFjg+rzujCxWv33KReFF4I8cxbwdsEHp9z2WSqE4rYHLo9CfDaXfUsbuFolWy9DLngZ2BIrJViQovaKGHCC5SBOAIIQn7YEjoacKGhZi4NibECUKejnNMkSdtSlmcMhixI/oOaNfM2JD2yjPRapdO8ekVpDRxSJqI8gRhdZqp46l2Vuxv3hPtqe42pr+TeQXESgyJ/Us3y/yvTtUi0ceZrsGjmmLNqOrczCXVXVE3N79UJckhJk7hHsUFYVcrZ302tSbRtaveMh1/05mKVXs3y03xrm5JA7Z/jnMe1I5LtlWyr06K5fNs1Dns4wBHNM9TlHGJCqrkLfCIJzwb18bYuDPuP1ONhUyzh2/LePgAgBaVDg==</latexit> <latexit sha1_base64=\"2hDWUksPO3HucBzuPW+qo2DrXZs=\">AAAC0XicjVHLSsNAFD3GV62vqks3wSK4KokIuiy6cVnRPqAvJum0Dc2LyUQopSBu/QG3+lPiH+hfeGdMQS2iE5KcOfeeM3PvdWLfS6RlvS4Yi0vLK6u5tfz6xubWdmFnt5ZEqXB51Y38SDQclnDfC3lVetLnjVhwFjg+rzujCxWv33KReFF4I8cxbwdsEHp9z2WSqE4rYHLo9CfDaXfUsbuFolWy9DLngZ2BIrJViQovaKGHCC5SBOAIIQn7YEjoacKGhZi4NibECUKejnNMkSdtSlmcMhixI/oOaNfM2JD2yjPRapdO8ekVpDRxSJqI8gRhdZqp46l2Vuxv3hPtqe42pr+TeQXESgyJ/Us3y/yvTtUi0ceZrsGjmmLNqOrczCXVXVE3N79UJckhJk7hHsUFYVcrZ302tSbRtaveMh1/05mKVXs3y03xrm5JA7Z/jnMe1I5LtlWyr06K5fNs1Dns4wBHNM9TlHGJCqrkLfCIJzwb18bYuDPuP1ONhUyzh2/LePgAgBaVDg==</latexit> C k <latexit sha1_base64=\"R7tqcvCgMhrKmzYDK0B19C9ArQg=\">AAAC0XicjVHLSsNAFD2Nr1pfVZdugkVwVRIRdFnsxmVF+4BWS5JO29C8mEyEUgri1h9wqz8l/oH+hXfGKahFdEKSM+fec2buvW4S+KmwrNecsbC4tLySXy2srW9sbhW3dxppnHGP1b04iHnLdVIW+BGrC18ErJVw5oRuwJruqCrjzVvGUz+OrsQ4YdehM4j8vu85gqibTuiIodufVKfdyWjaLZassqWWOQ9sDUrQqxYXX9BBDzE8ZAjBEEEQDuAgpacNGxYS4q4xIY4T8lWcYYoCaTPKYpThEDui74B2bc1GtJeeqVJ7dEpALyeliQPSxJTHCcvTTBXPlLNkf/OeKE95tzH9Xe0VEiswJPYv3SzzvzpZi0Afp6oGn2pKFCOr87RLproib25+qUqQQ0KcxD2Kc8KeUs76bCpNqmqXvXVU/E1lSlbuPZ2b4V3ekgZs/xznPGgclW2rbF8clypnetR57GEfhzTPE1Rwjhrq5M3xiCc8G5fG2Lgz7j9TjZzW7OLbMh4+ACEZlVI=</latexit> <latexit 
sha1_base64=\"R7tqcvCgMhrKmzYDK0B19C9ArQg=\">AAAC0XicjVHLSsNAFD2Nr1pfVZdugkVwVRIRdFnsxmVF+4BWS5JO29C8mEyEUgri1h9wqz8l/oH+hXfGKahFdEKSM+fec2buvW4S+KmwrNecsbC4tLySXy2srW9sbhW3dxppnHGP1b04iHnLdVIW+BGrC18ErJVw5oRuwJruqCrjzVvGUz+OrsQ4YdehM4j8vu85gqibTuiIodufVKfdyWjaLZassqWWOQ9sDUrQqxYXX9BBDzE8ZAjBEEEQDuAgpacNGxYS4q4xIY4T8lWcYYoCaTPKYpThEDui74B2bc1GtJeeqVJ7dEpALyeliQPSxJTHCcvTTBXPlLNkf/OeKE95tzH9Xe0VEiswJPYv3SzzvzpZi0Afp6oGn2pKFCOr87RLproib25+qUqQQ0KcxD2Kc8KeUs76bCpNqmqXvXVU/E1lSlbuPZ2b4V3ekgZs/xznPGgclW2rbF8clypnetR57GEfhzTPE1Rwjhrq5M3xiCc8G5fG2Lgz7j9TjZzW7OLbMh4+ACEZlVI=</latexit> <latexit sha1_base64=\"R7tqcvCgMhrKmzYDK0B19C9ArQg=\">AAAC0XicjVHLSsNAFD2Nr1pfVZdugkVwVRIRdFnsxmVF+4BWS5JO29C8mEyEUgri1h9wqz8l/oH+hXfGKahFdEKSM+fec2buvW4S+KmwrNecsbC4tLySXy2srW9sbhW3dxppnHGP1b04iHnLdVIW+BGrC18ErJVw5oRuwJruqCrjzVvGUz+OrsQ4YdehM4j8vu85gqibTuiIodufVKfdyWjaLZassqWWOQ9sDUrQqxYXX9BBDzE8ZAjBEEEQDuAgpacNGxYS4q4xIY4T8lWcYYoCaTPKYpThEDui74B2bc1GtJeeqVJ7dEpALyeliQPSxJTHCcvTTBXPlLNkf/OeKE95tzH9Xe0VEiswJPYv3SzzvzpZi0Afp6oGn2pKFCOr87RLproib25+qUqQQ0KcxD2Kc8KeUs76bCpNqmqXvXVU/E1lSlbuPZ2b4V3ekgZs/xznPGgclW2rbF8clypnetR57GEfhzTPE1Rwjhrq5M3xiCc8G5fG2Lgz7j9TjZzW7OLbMh4+ACEZlVI=</latexit> <latexit sha1_base64=\"R7tqcvCgMhrKmzYDK0B19C9ArQg=\">AAAC0XicjVHLSsNAFD2Nr1pfVZdugkVwVRIRdFnsxmVF+4BWS5JO29C8mEyEUgri1h9wqz8l/oH+hXfGKahFdEKSM+fec2buvW4S+KmwrNecsbC4tLySXy2srW9sbhW3dxppnHGP1b04iHnLdVIW+BGrC18ErJVw5oRuwJruqCrjzVvGUz+OrsQ4YdehM4j8vu85gqibTuiIodufVKfdyWjaLZassqWWOQ9sDUrQqxYXX9BBDzE8ZAjBEEEQDuAgpacNGxYS4q4xIY4T8lWcYYoCaTPKYpThEDui74B2bc1GtJeeqVJ7dEpALyeliQPSxJTHCcvTTBXPlLNkf/OeKE95tzH9Xe0VEiswJPYv3SzzvzpZi0Afp6oGn2pKFCOr87RLproib25+qUqQQ0KcxD2Kc8KeUs76bCpNqmqXvXVU/E1lSlbuPZ2b4V3ekgZs/xznPGgclW2rbF8clypnetR57GEfhzTPE1Rwjhrq5M3xiCc8G5fG2Lgz7j9TjZzW7OLbMh4+ACEZlVI=</latexit> u 2 k <latexit 
sha1_base64=\"r6h6YTsP2Cjh4gzlxZYLbW3TnHw=\">AAACynicjVHLSsNAFD3GV62vqks3wSK4KkkRdFl048JFBfuAWksyndahaRImE6GE7vwBt/ph4h/oX3hnTEEtohOSnDn3nDtz7/XjQCTKcV4XrMWl5ZXVwlpxfWNza7u0s9tMolQy3mBREMm27yU8ECFvKKEC3o4l98Z+wFv+6FzHW/dcJiIKr9Uk5t2xNwzFQDBPEdVKe9loelvtlcpOxTHLngduDsrIVz0qveAGfURgSDEGRwhFOICHhJ4OXDiIiesiI04SEibOMUWRvCmpOCk8Ykf0HdKuk7Mh7XXOxLgZnRLQK8lp45A8EekkYX2abeKpyazZ33JnJqe+24T+fp5rTKzCHbF/+WbK//p0LQoDnJoaBNUUG0ZXx/IsqemKvrn9pSpFGWLiNO5TXBJmxjnrs208iald99Yz8Tej1Kzes1yb4l3fkgbs/hznPGhWK65Tca+Oy7WzfNQF7OMARzTPE9RwgToapspHPOHZurSkNbGyT6m1kHv28G1ZDx/S0pII</latexit> <latexit sha1_base64=\"r6h6YTsP2Cjh4gzlxZYLbW3TnHw=\">AAACynicjVHLSsNAFD3GV62vqks3wSK4KkkRdFl048JFBfuAWksyndahaRImE6GE7vwBt/ph4h/oX3hnTEEtohOSnDn3nDtz7/XjQCTKcV4XrMWl5ZXVwlpxfWNza7u0s9tMolQy3mBREMm27yU8ECFvKKEC3o4l98Z+wFv+6FzHW/dcJiIKr9Uk5t2xNwzFQDBPEdVKe9loelvtlcpOxTHLngduDsrIVz0qveAGfURgSDEGRwhFOICHhJ4OXDiIiesiI04SEibOMUWRvCmpOCk8Ykf0HdKuk7Mh7XXOxLgZnRLQK8lp45A8EekkYX2abeKpyazZ33JnJqe+24T+fp5rTKzCHbF/+WbK//p0LQoDnJoaBNUUG0ZXx/IsqemKvrn9pSpFGWLiNO5TXBJmxjnrs208iald99Yz8Tej1Kzes1yb4l3fkgbs/hznPGhWK65Tca+Oy7WzfNQF7OMARzTPE9RwgToapspHPOHZurSkNbGyT6m1kHv28G1ZDx/S0pII</latexit> <latexit sha1_base64=\"r6h6YTsP2Cjh4gzlxZYLbW3TnHw=\">AAACynicjVHLSsNAFD3GV62vqks3wSK4KkkRdFl048JFBfuAWksyndahaRImE6GE7vwBt/ph4h/oX3hnTEEtohOSnDn3nDtz7/XjQCTKcV4XrMWl5ZXVwlpxfWNza7u0s9tMolQy3mBREMm27yU8ECFvKKEC3o4l98Z+wFv+6FzHW/dcJiIKr9Uk5t2xNwzFQDBPEdVKe9loelvtlcpOxTHLngduDsrIVz0qveAGfURgSDEGRwhFOICHhJ4OXDiIiesiI04SEibOMUWRvCmpOCk8Ykf0HdKuk7Mh7XXOxLgZnRLQK8lp45A8EekkYX2abeKpyazZ33JnJqe+24T+fp5rTKzCHbF/+WbK//p0LQoDnJoaBNUUG0ZXx/IsqemKvrn9pSpFGWLiNO5TXBJmxjnrs208iald99Yz8Tej1Kzes1yb4l3fkgbs/hznPGhWK65Tca+Oy7WzfNQF7OMARzTPE9RwgToapspHPOHZurSkNbGyT6m1kHv28G1ZDx/S0pII</latexit> <latexit 
Figure 2: Overview of our DialoFlow.", "Figure 2 illustrates the architecture of DialoFlow, which consists of the input embeddings, transformer blocks, a uni-directional Flow module, and a response generator.", "Input Embedding.", "DialoFlow takes the sum of the token embedding, segment embedding, and position embedding as the model input.", "In particular, we insert a special token [C] at the end of each utterance, which is used to capture the overall dense representation of the dialogue history.", "To enhance the modeling of different speakers, we utilize segment embeddings of two types: [Speaker1] and [Speaker2].", "Transformer Block.", "A transformer block consists of the following key components: layer normalization, multi-head attention, and feed-forward layers.", "We employ the pre-normalization used in GPT-2 (Radford et al., 2019) instead of the post-normalization used in BERT (Devlin et al., 2019), as Shoeybi et al. (2019) show that post-normalization leads to performance degradation as the model size increases, while pre-normalization enables stable large-scale training.", "DialoFlow keeps the uni-directional dialogue encoding and enables training on the dialogue level rather than in the context-response setting.", "We can obtain the history context at the k-th utterance encoded by the transformer blocks: C_k = Transformer(u_{<k}), (2) where C_k is the hidden state at the
position of the special token [C].", "The hidden state at the position of each token u_k^t in the input sequence is denoted as h_k^t.", "Flow Module.", "To capture the dynamic information flow across the dialogue utterances, we design a Flow module to model the context-changing scheme.", "The architecture of the Flow module is the same as one layer of a transformer block.", "The Flow module takes all the previous contexts {C_1, C_2, ..., C_k} as input and predicts the context at the (k+1)-th utterance, C'_{k+1}: C'_{k+1} = Flow(C_1, C_2, ..., C_k).", "Response Generator.", "DialoFlow generates the utterance u_k with the guidance of the predicted semantic influence I'_k.", "The response generator contains a feed-forward layer and a softmax layer to convert the hidden states to tokens.", "When generating the t-th word, the response generator takes the predicted semantic influence I'_k and the hidden state h_k^{t-1} as input, and outputs the probability distribution of the t-th word: p(u_k^t | I'_k, u_{<k}, u_k^{<t}) = softmax(W_1 [I'_k ; h_k^{t-1}] + b_1) ∈ R^{|V|}, (5) where |V| refers to the vocabulary size, and W_1 and b_1 are learnable parameters.", "Different from traditional training approaches with context-response pairs, DialoFlow is trained on whole dialogues containing N utterances.", "Correspondingly, we design three training tasks to optimize the model: 1) Context Flow Modeling, 2) Semantic Influence Modeling, and 3) Response Generation Modeling.", "Context Flow Modeling.", "To capture the dynamic context flow, DialoFlow predicts the context at the k-th utterance, C'_k, based on the previous context sequence {C_1, ..., C_{k-1}}.", "We minimize the L2 distance between the predicted context C'_k and the real context C_k: L_CFM = Σ_{k=1}^{N} ||C_k - C'_k||_2^2.", "Semantic Influence Modeling.", "To force the effective modeling of the semantic influence brought about by
the n-th utterance at the context C_{n-1}, we design a bag-of-words loss using the predicted semantic influence I'_n: L_SIM = -Σ_{k=1}^{N} Σ_{t=1}^{T} log p(u_k^t | I'_k) = -Σ_{k=1}^{N} Σ_{t=1}^{T} log f_{u_k^t}, (7) where f_{u_k^t} denotes the estimated probability of the t-th word u_k^t in the utterance u_k.", "The function f is used to predict the words in the utterance u_k in a non-autoregressive way: f = softmax(W_2 I'_k + b_2) ∈ R^{|V|}, (8) where |V| refers to the vocabulary size, and W_2 and b_2 are learnable parameters.", "Response Generation Modeling.", "The predicted semantic influence I'_k can also be regarded as a semantic expectation of the k-th utterance.", "We incorporate the predicted semantic influence I'_k into the response generation stage to guide the generation.", "The response generation objective is as follows: L_RGM = -Σ_{k=1}^{N} log p(u_k | I'_k, u_{<k}) = -Σ_{k=1}^{N} Σ_{t=1}^{T} log p(u_k^t | I'_k, u_{<k}, u_k^{<t}). (9)", "The overall training objective of DialoFlow is computed as follows: L = L_CFM + L_SIM + L_RGM.", "By optimizing the aforementioned three training objectives, DialoFlow can capture the dynamic information flow across the dialogue history.", "As DialoFlow is trained on human-human dialogues, the context flow scheme can be regarded as the general expectation of how a dialogue develops.", "Therefore, the smaller the gap between the semantic influence brought by the chatbot's utterance and this expectation, the more human-like the utterance.", "Based on this consideration, we propose Flow score, an automatic reference-free metric for interactive dialogue evaluation built on DialoFlow.", "In a human-bot conversation, when the bot generates a new utterance u_k, we measure the similarity between the predicted semantic influence I'_k and the real semantic influence I_k brought about by the utterance u_k, which can be considered as the
probability of the human-likeness of the utterance.", "To compute the similarity between the semantic influences, we measure both the cosine similarity and the length similarity: s_k = cos(I'_k, I_k) · length(I'_k, I_k) = (I'_k · I_k) / (||I'_k|| ||I_k||) · min(||I'_k||, ||I_k||) / max(||I'_k||, ||I_k||). (11)", "Note that we introduce the length similarity to account for the influence of length differences on semantic similarity.", "For the overall quality of the chatbot in the dialogue, we design a metric that can be regarded as a dialogue-level perplexity: Flow score = 2^{-(1/M) Σ_{k=1}^{M} log((s_k+1)/2)}, (12) where M denotes the number of chatbot turns and (s_k+1)/2 scales the similarity value to [0, 1].", "A lower Flow score corresponds to better dialogue quality.", "For model pre-training, we use the Reddit comments, which are collected by a third party and made publicly available on pushshift.io (Baumgartner et al., 2020).", "We clean the data following the pipeline used in DialoGPT.", "For response generation, we employ the multi-reference Reddit test dataset (Zhang et al., 2020), which contains 6k examples with multiple references.", "We evaluate our pre-trained DialoFlow model on this dataset.", "The average length of the dialogue history in this dataset is 1.47.", "To further explore the dynamic information flow with long dialogue histories, we choose another popular open-domain dialogue dataset, the DailyDialog dataset (Li et al., 2017), in which the average dialogue history length is about 4.66.", "DialoFlow is fine-tuned on the DailyDialog training set and evaluated on the DailyDialog multi-reference test set (Gupta et al., 2019).", "For interactive dialogue quality evaluation, we employ the collected data from the Interactive Evaluation of Dialog Track @ The Ninth Dialog System Technology Challenge (DSTC9) (Gunasekara et al., 2021),
which contains 2200 human-bot conversations from 11 chatbots.", "For each conversation, there are 3 human ratings on the overall quality (0-5).", "We calculate the correlation between the results of our proposed metric and the human ratings at the chatbot level.", "Human-human conversations are always regarded as better than human-bot conversations.", "Therefore, we randomly sample 200 human-human dialogues from the BST dataset (Smith et al., 2020) to examine the metric's performance on real human-human conversations.", "Pre-training Details.", "DialoFlow is pre-trained from the pre-trained GPT-2 (Radford et al., 2019), since Zhang et al. (2020) show that DialoGPT trained from the pre-trained GPT-2 is much better than one trained from scratch.", "There are three model sizes: DialoFlow-base, DialoFlow-medium, and DialoFlow-large, which are trained from the pre-trained GPT2-base, GPT2-medium, and GPT2-large, respectively (see https://github.com/microsoft/DialoGPT).", "We used the AdamW optimizer (Loshchilov and Hutter, 2019) with 0.01 weight decay and a linear learning rate scheduler with 12000 warm-up steps.", "The learning rate is 2e-4 for the base and medium versions and 1e-4 for the large version.", "We use a batch size of 1024 for all model sizes.", "We trained the base and medium models for up to 4 epochs and the large model for 2 epochs.", "Training the large model took about two months on 8 Nvidia V100 GPUs.", "Decoding Details.", "On the 6K Reddit multi-reference dataset, we use beam search (with beam width 10) for the DialoFlow-medium and DialoFlow-large models.", "We employ greedy search for the DialoFlow-base model, consistent with Zhang et al. (2020).", "On the DailyDialog dataset, we fine-tune the pre-trained DialoFlow and DialoGPT, select the checkpoint based on the validation loss, and then use beam search (with beam width 5) for decoding.", "For response generation, we compare our proposed DialoFlow with DialoGPT, a popular
dialogue generation model pre-trained on Reddit comments.", "We choose the version trained from the pre-trained OpenAI GPT-2 for comparison.", "For interactive dialogue evaluation, we compare our metric with the following metrics: 1) FED score (Mehri and Eskenazi, 2020) is an automatic evaluation metric which uses DialoGPT-large without any fine-tuning or supervision.", "FED takes DialoGPT-large as the user and calculates the likelihood of follow-up utterances based on several pre-set common human utterances.", "FED relies on these pre-set common human utterances, which can reveal the dialogue quality.", "2) Perplexity is used to measure the coherence of an utterance under the dialogue context.", "We employ DialoGPT-large to measure the perplexity of each utterance of the chatbot.", "We average the perplexity of all utterances in the whole dialogue as the baseline metric.", "For dialogue response generation, we perform automatic evaluation using common reference-based metrics: BLEU (Papineni et al., 2002), METEOR (Lavie and Agarwal, 2007), and NIST (Lin and Och, 2004).", "Method | NIST-2 | NIST-4 | BLEU-2 | BLEU-4 | METEOR | Entropy | Avg Len
Multi-reference Reddit Dataset
DialoGPT (B, greedy) | 2.39 | 2.41 | 10.54% | 1.55% | 7.53% | 10.77 | 12.82
DialoFlow (B, greedy) | 2.88 | 2.93 | 15.34% | 3.97% | 9.52% | 9.27 | 15.43
DialoGPT (M, beam) | 3.40 | 3.50 | 21.76% | 7.92% | 10.74% | 10.48 | 11.34
DialoFlow (M, beam) | 3.89 | 3.99 | 20.98% | 7.36% | 11.46% | 10.42 | 13.37
DialoGPT (L, beam) | 2.90 | 2.98 | 21.08% | 7.57% | 10.11% | 10.06 | 10.68
DialoFlow (L, beam) | 3.90 | 4.01 | 21.20% | 7.42% | 11.48% | 10.42 | 13.38
Human | 3.41 | 3.50 | 17.90% | 7.48% | 10.64% | 10.99 | 13.10
Multi-reference DailyDialog Dataset
DialoGPT (B, beam) | 2.28 | 2.78 | 18.83% | 6.63% | 15.5% | 9.80 | 18.82
DialoFlow (B, beam) | 3.65 | 3.84 | 26.47% | 10.12% | 16.1% | 9.62 | 12.00
DialoGPT (M, beam) | 3.47 | 3.65 | 25.39% | 9.99% | 15.9% | 9.64 | 12.88
DialoFlow (M, beam) | 3.80 | 4.02 | 27.63% | 11.33% | 16.7% | 9.83 | 12.06
DialoGPT (L, beam) | 3.30 | 3.46 | 23.69% | 9.20% | 15.7% | 9.78 | 13.24
DialoFlow (L, beam) | 3.86 | 4.08 | 28.02% | 11.57% | 17.0% | 9.87 | 12.08
Ablation Study on Multi-reference Reddit Dataset
DialoFlow (M, beam) | 3.89 | 3.99 | 20.98% | 7.36% | 11.46% | 10.42 | 13.37
w/o SIM | 3.85 | 3.96 | 21.36% | 7.71% | 11.26% | 10.43 | 12.70
w/o SIM & CFM | 3.79 | 3.89 | 21.33% | 7.65% | 11.25% | 10.33 | 12.55
Table 1: The evaluation on the 6K Reddit multi-reference dataset and on the DailyDialog dataset.", "NIST is a variant of BLEU that weights n-gram matches by their information gain, i.e., it indirectly penalizes uninformative n-grams such as I don't know, which makes it a more suitable metric than BLEU when dealing with multi-reference test sets.", "We also use Entropy (Zhang et al., 2018) to evaluate the lexical diversity.", "We employ the evaluation scripts used by DialoGPT.", "For interactive dialogue evaluation, we compute the Pearson and Spearman correlations between the automatic metrics and human ratings.", "We use the pre-trained DialoFlow-large to compute our proposed Flow score.", "In this section, we show the performance of our pre-trained DialoFlow model on response generation as well as the performance of the Flow score on interactive dialogue quality evaluation.", "Table 1 lists the comparison of our pre-trained DialoFlow with the pre-trained DialoGPT on the Reddit multi-reference dataset.", "Generally, DialoFlow-large achieves the highest scores on NIST and METEOR, while DialoGPT-medium performs better on BLEU.", "The performance of our DialoFlow increases with the model size, while DialoGPT gets its best performance with the medium size rather than the large size.", "Methods | B1 | B2 | B3 | B4 | B5 | B6 | B7 | B8 | B9 | B10 | B11 | Human
Human | 4.142 | 4.140 | 4.075 | 4.035 | 3.933 | 3.864 | 3.849 | 3.848 | 3.828 | 3.692 | 3.605 | 5.000
FED | 4.988 | 4.818 | 4.621 | 4.670 | 4.555 | 4.739 | 4.438 | 4.355 | 4.651 | 4.799 | 3.608 | 3.468
Perplexity | 600.0 | 521.2 | 441.2 | 561.6 | 367.7 | 1731 | 1879 | 13347 | 662.2 | 618.4 | 50.29 | 51.39
Flow | 1.396 | 1.410 | 1.402 | 1.406 | 1.407 | 1.422 | 1.425 | 1.417 | 1.425 | 1.461 | 1.466 | 1.333
Table 3: The human ratings and automatic metrics for different chatbots.", "As NIST can effectively penalize common
n-grams such as I don't know, the results reveal that DialoGPT tends to generate generic responses while our DialoFlow model can create more informative responses.", "The results also reflect that modeling the dynamic flow helps boost the conversation quality and avoid converging to generic responses.", "For lexical diversity, DialoFlow performs similarly to DialoGPT on Entropy.", "The average history length of the multi-reference Reddit dataset is only 1.45, which is a bit short.", "Thus, we conduct extensive experiments on the DailyDialog dataset (average history length = 4.66) to verify the performance gain with long dialogue histories.", "As shown in Table 1, DialoFlow shows significant improvements over DialoGPT on all model sizes and on all metrics.", "The improvements on the DailyDialog dataset demonstrate that our DialoFlow model has a great capacity to capture the dynamic information flow over a long history.", "Note that the performance improvement on the DailyDialog dataset is more remarkable than on Reddit.", "In our opinion, conversations on Reddit are mainly forum comments, while in DailyDialog the dialogues are derived from daily life.", "Thus, in the DailyDialog dataset, the context flows follow a more similar schema, and the semantic influences are more predictable than in the Reddit dataset.", "Human Evaluation.", "We conduct a human evaluation on 200 randomly sampled cases from the DailyDialog test set using crowd-sourcing.", "We compare DialoFlow and DialoGPT on the medium version.", "Each response pair is randomly presented to 3 judges, who rank the responses for relevance, informativeness, and human-likeness.", "The overall judge preferences are presented as a percentage of the total, as shown in Table 2.", "
There is a strong preference for the responses generated by DialoFlow.", "The human evaluation demonstrates that modeling the dynamic information flow is effective for improving the quality of dialogue generation.", "Analysis of dialogue history length.", "Figure 3 shows the performance of our DialoFlow and DialoGPT for different history lengths.", "Overall, our DialoFlow achieves better performance at all history lengths.", "In particular, when the history length equals 1, that is, the response is generated based on one history utterance, our DialoFlow also gains a prominent boost.", "We attribute this to the guidance of the predicted semantic influence.", "Ablation Study.", "To explore the effect of the proposed training objectives, we conduct ablation studies on the medium version of DialoFlow, as shown in Table 1.", "With all three training objectives, the DialoFlow model achieves the best performance on NIST and METEOR.", "When we drop the Semantic Influence Modeling task, the performance slightly decreases.", "When we further drop the Context Flow Modeling task, which amounts to end-to-end training, the performance decreases again.", "The results reveal that the Context Flow Modeling task is effective for dialogue modeling and the Semantic Influence Modeling task can prompt the CFM task.", "Results.", "Table 4 shows the chatbot-level correlations of different automatic metrics with human ratings on the DSTC9 Interactive Conversation dataset.", "Our proposed Flow score achieves a strong Spearman correlation of 0.90 (p < 0.001) and a strong Pearson correlation of 0.91 (p < 0.001).", "FED only shows moderate correlations, with a chatbot-level Spearman correlation of 0.56 (p < 0.
1).", "The perplexity score shows a very weak correlation.", "On the one hand, the results reveal that our proposed Flow score can effectively estimate the overall chatbot quality.", "On the other hand, the high correlation also demonstrates that the DialoFlow model captures the general dynamic information flow in natural human-human conversation.", "Results Analysis.", "Table 3 shows the detailed human ratings, FED scores, perplexity, and our proposed Flow score for the 11 chatbots in the DSTC9 Interactive Dialogue Evaluation Track and the sampled human-human conversations.", "Good automatic metrics should perform well not only on human-bot conversations but also on human-human conversations, because the ultimate goal of a chatbot is to generate human-like responses.", "FED performs poorly on the human-human conversations compared to its performance on the 11 chatbots.", "Our proposed Flow score ranks the human-human conversations as the best, and the Flow score gap between human-human conversations and the best chatbot is similar to the human rating gap.", "Analysis of the Flow score.", "The Flow score can be regarded as perplexity at the utterance level.", "There are many different expressions for a specific meaning in natural conversations.", "Traditional word-level perplexity can estimate the coherence and fluency of an utterance but often performs unstably across variable expressions.", "The Flow score directly measures the semantic similarity and alleviates this problem with traditional perplexity.", "Figure 4 shows the 2-D t-SNE visualization of the semantic contexts of a human-bot conversation encoded by our pre-trained DialoFlow model.", "The conversation can be split into four topics: greetings (1-4), talking about why the day was bad (5-13), explaining the terrible experience of seeing the doctor (14-18), and discussing swimming (19-26).", "Correspondingly, the semantic context flow in the visualization changes a lot when the topic
switches, revealing that DialoFlow can capture the dynamic information flow in the dialogue and effectively measure the semantic influence brought about by each utterance.", "Besides, it appears that different speakers maintain their own context flows.", "Multi-turn dialogue modeling.", "The modeling of multi-turn dialogue history mainly falls into two categories: 1) Flat concatenation.", "These works directly concatenate the dialogue history as the input sequence (Zhang et al., 2020), which cannot capture the information dynamics.", "2) Hierarchical architectures.", "Hierarchical architectures are commonly used for dialogue history understanding.", "Serban et al. (2016a) propose a hierarchical LSTM to generate responses.", "Li et al. (2019) introduce an incremental transformer to capture multi-turn dependencies.", "Shan et al. (2020) and Gu et al. (2020) employ pre-trained BERT to encode individual utterances and design an utterance-level encoder to capture the turn-level structure.", "These methods lack contextual word-level information when encoding utterances.", "Different from these methods, our DialoFlow takes full advantage of both word-level information and utterance-level dynamic information.", "Besides, the proposed DialoFlow is pre-trained on a large-scale open-domain dialogue dataset.", "Pre-trained models for dialogue generation.", "Recent advances in pre-trained language models have achieved great success in dialogue response generation.", "DialoGPT (Zhang et al., 2020), Plato-2 (Bao et al., 2020), Meena (Adiwardana et al., 2020), and Blender (Smith et al., 2020) achieve strong generation performance by training transformer-based language models on open-domain conversation corpora.", "In contrast, our proposed DialoFlow focuses on modeling the dynamic information flow in the pre-training process, and we design three training objectives to optimize the model.", "Interactive Dialogue Evaluation.", "Evaluating the quality of interactive
dialogue automatically is a challenging problem, as there is no gold reference for the utterances.", "Mehri and Eskenazi (2020) propose the FED score, an automatic dialogue evaluation metric using pre-trained DialoGPT-large, which works with pre-set common human comments, like \"It is interesting to talk with you.\", revealing the dialogue quality.", "However, the FED score has limited performance on dialogues without such apparent comments.", "Our Flow score depends entirely on the pre-trained DialoFlow model, with no need for human involvement.", "In this work, we proposed DialoFlow to model the dynamic information flow across dialogue utterances by addressing the semantic influence brought about by each utterance.", "Specifically, we employed a uni-directional Flow module to model the context flow and designed three training objectives to optimize the DialoFlow model.", "Besides, building upon the pre-trained DialoFlow, we proposed the Flow score, an automatic reference-free metric for interactive dialogue evaluation.", "Experiments on response generation and dialogue evaluation both demonstrate that our method can effectively capture the dynamic information flow across utterances.", "For future work, we would like to apply DialoFlow to task-oriented dialogue and explore its application to long text generation, such as story generation.", "We sincerely thank the anonymous reviewers for their thorough reviewing and valuable suggestions.", "This work is supported by the National Key R&D Program of China (No. 2018AAA0102502)." ]
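The utterance-level, perplexity-style view of the Flow score described in the excerpt above can be illustrated with a small sketch. This is a hedged toy formulation: the mapping of cosine similarity into (0, 1] and the perplexity-style aggregation are assumptions for illustration, not the paper's exact definition, and the `flow_score` name is hypothetical.

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def flow_score(predicted_influences, observed_influences):
    """Perplexity-style aggregation of utterance-level semantic
    similarities. Each influence is a semantic-context difference
    vector; similarities are mapped into (0, 1] and aggregated the
    way word-level perplexity aggregates probabilities.
    Illustrative formulation only, not the paper's exact metric."""
    probs = []
    for p, o in zip(predicted_influences, observed_influences):
        s = (cosine(p, o) + 1.0) / 2.0  # map [-1, 1] into [0, 1]
        probs.append(max(s, 1e-8))      # guard against log(0)
    avg_log = sum(math.log(s) for s in probs) / len(probs)
    return math.exp(-avg_log)           # lower is better, like perplexity

# A perfectly predicted influence gives the best possible score of 1.0.
print(flow_score([[1.0, 0.0]], [[1.0, 0.0]]))  # -> 1.0
```

The key property this sketch preserves is that the score is computed over utterance-level semantic vectors rather than word probabilities, which is why it tolerates varied surface expressions of the same meaning better than word-level perplexity.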
[ "abstain", "method", "objective", "objective", "objective", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "abstain", "objective", "objective", "abstain", "abstain", "objective", "objective", "abstain", "objective", "objective", "method", "objective", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", 
"other", "other", "other", "other", "method", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "method", "objective", "objective", "objective", "objective", "objective", "other", "other" ]
[ "Multi-document summarization (MDS) aims to compress the content in large document collections into short summaries and has important applications in story clustering for newsfeeds, presentation of search results, and timeline generation.", "However, there is a lack of datasets that realistically address such use cases at a scale large enough for training supervised models for this task.", "This work presents a new dataset for MDS that is large both in the total number of document clusters and in the size of individual clusters.", "We build this dataset by leveraging the Wikipedia Current Events Portal (WCEP), which provides concise and neutral human-written summaries of news events, with links to external source articles.", "We also automatically extend these source articles by looking for related articles in the Common Crawl archive.", "We provide a quantitative analysis of the dataset and empirical results for several state-of-the-art MDS techniques.", "The dataset is available at https://github.com/complementizer/ wcep-mds-dataset .", "Text summarization has recently received increased attention with the rise of deep learning-based end-to-end models, both for extractive and abstractive variants.", "However, so far, only single-document summarization has profited from this trend.", "Multi-document summarization (MDS) still suffers from a lack of established large-scale datasets.", "This impedes the use of large deep learning models, which have greatly improved the state-of-the-art for various supervised NLP problems (Vaswani et al., 2017; Paulus et al., 2018; Devlin et al., 2019), and makes a robust evaluation difficult.", "Recently, several larger MDS datasets have been created: Zopf (2018); Liu et al. (2018); Fabbri et al. 
(2019).", "However, these datasets do not realistically resemble use Human-written summary EmperorAkihitoabdicatestheChrysanthemumThroneinfavorofhiselderson,CrownPrinceNaruhito.HeisthefirstEmperortoabdicateinovertwohundredyears,sinceEmperorKkakuin1817.", "Headlines of source articles (WCEP) Defining the Heisei Era: Just how peaceful were the past 30 years?", "cases with large automatically aggregated collections of news articles, focused on particular news events.", "This includes news event detection, news article search, and timeline generation.", "Given the prevalence of such applications, there is a pressing need for better datasets for these MDS use cases.", "In this paper, we present the Wikipedia Current Events Portal (WCEP) dataset, which is designed to address real-world MDS use cases.", "The dataset consists of 10,200 clusters with one human-written summary and 235 articles per cluster on average.", "We extract this dataset starting from the Wikipedia Current Events Portal (WCEP) 1 .", "Editors on WCEP write short summaries about news events and provide a small number of links to relevant source articles.", "We extract the summaries and source articles from WCEP and increase the number of source articles per summary by searching for similar articles in the Common Crawl News dataset 2 .", "As a result, we obtain large clusters of highly redundant news articles, resembling the output of news clustering applications.", "Table 1 shows an example of 1 https://en.wikipedia.org/wiki/Portal: Current_events 2 https://commoncrawl.org/2016/10/ news-dataset-available/ an event summary, with headlines from both the original article and from a sample of the associated additional sources.", "In our experiments, we test a range of unsupervised and supervised MDS methods to establish baseline results.", "We show that the additional articles lead to much higher upper bounds of performance for standard extractive summarization, and help to increase the performance of 
baseline MDS methods.", "We summarize our contributions as follows: We present a new large-scale dataset for MDS, that is better aligned with several real-world industrial use cases.", "We provide an extensive analysis of the properties of this dataset.", "We provide empirical results for several baselines and state-of-the-art MDS methods aiming to facilitate future work on this dataset.", "Extractive MDS models commonly focus on either ranking sentences by importance (Hong and Nenkova, 2014; Cao et al., 2015; Yasunaga et al., 2017) or on global optimization to find good combinations of sentences, using heuristic functions of summary quality (Gillick and Favre, 2009; Lin and Bilmes, 2011; Peyrard and Eckle-Kohler, 2016).", "Several abstractive approaches for MDS are based on multi-sentence compression and sentence fusion (Ganesan et al., 2010; Banerjee et al., 2015; Chali et al., 2017; Nayeem et al., 2018).", "Recently, neural sequence-to-sequence models, which are the state-of-the-art for abstractive single-document summarization (Rush et al., 2015; Nallapati et al., 2016; See et al., 2017), have been used for MDS, e.g., by applying them to extractive summaries (Liu et al., 2018) or by directly encoding multiple documents (Zhang et al., 2018; Fabbri et al., 2019).", "Datasets for MDS consist of clusters of source documents and at least one ground-truth summary assigned to each cluster.", "Commonly used traditional datasets include the DUC 2004 (Paul and James, 2004) and TAC 2011 (Owczarzak and Dang, 2011), which consist of only 50 and 100 document clusters with 10 news articles on average.", "The MultiNews dataset (Fabbri et al., 2019) is a recent large-scale MDS dataset, containing 56,000 clusters, but each cluster contains only 2.3 source documents on average.", "The sources were hand-picked by editors and do not reflect use cases with large automatically aggregated document collections.", "MultiNews has much more verbose summaries than WCEP.", "Zopf (2018) 
created the auto-hMDS dataset by using the lead section of Wikipedia articles as summaries, and automatically searching for related documents on the web, resulting in 7,300 clusters.", "The WikiSum dataset (Liu et al., 2018) uses a similar approach and additionally uses cited sources on Wikipedia.", "The dataset contains 2.3 million clusters.", "These Wikipedia-based datasets also have long summaries about various topics, whereas our dataset focuses on short summaries about news events.", "Wikipedia Current Events Portal: WCEP lists current news events on a daily basis.", "Each news event is presented as a summary with at least one link to external news articles.", "According to the editing guidelines (https://en.wikipedia.org/wiki/Wikipedia:How_the_Current_events_page_works), the summaries must be short, up to 30-40 words, and written in complete sentences in the present tense, avoiding opinions and sensationalism.", "Each event must be of international interest.", "Summaries are written in English, and news sources are preferably English.", "Obtaining Articles Linked on WCEP: We parse the WCEP monthly pages to obtain a list of individual events, each with a list of URLs to external source articles.", "To prevent the source articles of the dataset from becoming unavailable over time, we use the Save Page Now feature of the Internet Archive (https://web.archive.org/save/).", "We request snapshots of all source articles that are not yet captured in the Internet Archive.", "We download and extract all articles from the Internet Archive Wayback Machine (https://archive.org/web/) using the newspaper3k library (https://github.com/codelucas/newspaper).", "Additional Source Articles: Each event from WCEP contains only 1.2 sources on average, meaning that most editors provide only one source article when they add a new event.", "In order to extend the set of input articles for each of the ground-truth summaries, we search for similar articles in the Common 
Crawl News dataset.", "We train a logistic regression classifier to decide whether to assign an article to a summary, using the original WCEP summaries and source articles as training data.", "For each event, we label the article-summary pair for each source article of the event as positive.", "We create negative examples by pairing each event with source articles from other events of the same date, resulting in a positive-negative ratio of 7:100.", "The features used by the classifier are listed in Table 2.", "Table 2: Features used in the article-summary binary classifier: 1) tf-idf similarity between title and summary; 2) tf-idf similarity between body and summary; 3) number of entities from the summary appearing in the title; 4) number of linked entities from the summary appearing in the body.", "We use unigram bag-of-words vectors with TF-IDF weighting and cosine similarity for the first two features.", "The entities are phrases in the WCEP summaries that the editors annotated with hyperlinks to other Wikipedia articles.", "We search for these entities in article titles and bodies by exact string matching.", "The classifier achieves 90% Precision and 74% Recall of positive examples on a hold-out set.", "For each event in the original dataset, we apply the classifier to articles published within a window of ±1 day of the event date and add those articles whose classification probability exceeds 0.9.", "If an article is assigned to multiple events, we only add it to the event with the highest probability.", "This procedure increases the number of source articles per summary considerably (Table 4).", "Final Dataset: Each example in the dataset consists of a ground-truth summary and a cluster of original source articles from WCEP, combined with additional articles from Common Crawl.", "The dataset has 10,200 clusters, which we split roughly into 80% training, 10% validation, and 10% test (Table 3).", "The split is done chronologically, such that no event dates overlap between the splits.", "We also 
create a truncated version of the dataset with a maximum of 100 articles per cluster, by retaining all original articles and randomly sampling from the additional articles.", "Table 3 shows the number of clusters and of articles from all clusters combined, for each dataset partition.", "Table 4 shows statistics for individual clusters.", "We show statistics for the entire dataset (WCEP-total), and for the truncated version (WCEP-100) used in our experiments.", "The high mean cluster size is mostly due to articles from Common Crawl.", "To investigate how related the additional articles obtained from Common Crawl are to the summary they are assigned to, we randomly select 350 for manual annotation.", "We compare the article title and the first three sentences to the assigned summary, and pick one of the following three options: 1) \"on-topic\" if the article focuses on the event described in the summary, 2) \"related\" if the article mentions the event, but focuses on something else, e.g., follow-up, and 3) \"unrelated\" if there is no mention of the event.", "This results in 52% on-topic, 30% related, and 18% unrelated articles.", "We think that this amount of noise is acceptable, as it resembles noise present in applications with automatic content aggregation.", "Furthermore, summarization performance benefits from the additional articles in our experiments (see Section 5).", "Human-written summaries can vary in the degree of how extractive or abstractive they are, i.e., how much they copy or rephrase information in source documents.", "To quantify extractiveness in our dataset, we use the measures coverage and density defined by Grusky et al. 
(2018): Coverage(A, S) = (1/|S|) \sum_{f \in F(A,S)} |f| (1), and Density(A, S) = (1/|S|) \sum_{f \in F(A,S)} |f|^2 (2), where, given an article A consisting of tokens <a_1, a_2, ..., a_n> and its summary S = <s_1, s_2, ..., s_n>, F(A, S) is the set of token sequences (fragments) shared between A and S, identified in a greedy manner.", "Coverage measures the proportion of words from the summary appearing in these fragments.", "Density is related to the average length of shared fragments and measures how well a summary can be described as a series of extractions.", "In our case, A is the concatenation of all articles in a cluster.", "Figure 1 shows the distribution of coverage and density in different summarization datasets.", "WCEP-10 refers to a truncated version of our dataset with a maximum cluster size of 10.", "The WCEP dataset shows increased coverage if more articles from Common Crawl are added, i.e., all words of a summary tend to be present in larger clusters.", "High coverage suggests that retrieval and copy mechanisms within a cluster can be useful to generate summaries.", "Likely due to the short summary style and editor guidelines, high density, i.e., copying of long sequences, is not as common in WCEP as in the MultiNews dataset.", "Due to scalability issues of some of the tested methods, we use the truncated version of the dataset with a maximum of 100 articles per cluster (WCEP-100).", "The performance of the methods that we consider starts to plateau after 100 articles (see Figure 2).", "We set a maximum summary length of 40 tokens, which is in accordance with the editor guidelines in WCEP.", "This limit also corresponds to the optimal length of an extractive oracle optimizing ROUGE F1-scores (footnote 8).", "We recommend to evaluate models with a dynamic (potentially longer) output length using F1-scores and optionally to provide Recall results with truncated summaries.", "Extractive methods should only return 
lists of full untruncated sentences up to that limit.", "We evaluate lowercased versions of summaries and do not modify ground-truth or system summaries otherwise.", "We compare and evaluate systems using F1-score and Recall of ROUGE-1, ROUGE-2, and ROUGE-L (Lin, 2004).", "In the following, we abbreviate ROUGE-1 F1-score and Recall with R1-F and R1-R, etc.", "5.2 Methods", "We evaluate the following oracles and baselines to put evaluation scores into perspective:", "ORACLE (MULTI): Greedy oracle, adds sentences from a cluster that optimize R1-F of the constructed summary until R1-F decreases.", "ORACLE (SINGLE): Best of oracle summaries extracted from individual articles in a cluster.", "LEAD ORACLE: The lead (first sentences up to 40 words) of an individual article with the best R1-F score within a cluster.", "RANDOM LEAD: The lead of a randomly selected article, which is our alternative to the lead baseline used in single-document summarization.", "We evaluate the unsupervised methods TEXTRANK (Mihalcea and Tarau, 2004), CENTROID (Radev et al., 2004) and SUBMODULAR (Lin and Bilmes, 2011).", "We test the following supervised methods: TSR: Regression-based sentence ranking using statistical features and averaged word embeddings (Ren et al., 2016).", "Footnote 8: We tested lengths 25 to 50 in steps of 5. For these tests, the oracle is forced to pick a summary up to that length.", "BERTREG: Similar framework to TSR but with sentence embeddings computed by a pre-trained BERT model (Devlin et al., 2019).", "Refer to Appendix A.1 for more details.", "We tune hyperparameters of the methods described above on the validation set of WCEP-100 (Appendix A.2).", "We also test a simple abstractive baseline, SUBMODULAR + ABS: We first create an extractive multi-document summary with a maximum of 100 words using SUBMODULAR.", "We pass this summary as a pseudo-article to the abstractive bottom-up attention model (Gehrmann et al., 2018) to generate the final summary.", "We use an implementation from OpenNMT with a model pre-trained on the CNN/Daily Mail dataset.", "All tested methods apart from ORACLE (MULTI & SINGLE) observe the length limit of 40 tokens.", "Table 5 presents the results on the WCEP test set.", "The supervised methods TSR and BERTREG show advantages over unsupervised methods, but not by a large margin, which poses an interesting challenge for future work.", "The high extractive bounds defined by ORACLE (SINGLE) suggest that identifying important documents before summarization can be useful in this dataset.", "The dataset does not favor lead summaries: RANDOM LEAD is of low quality, and LEAD ORACLE has relatively low F-scores (although very high Recall).", "The SUBMODULAR + ABS heuristic for applying a pre-trained abstractive model does not perform well.", "Figure 2 shows how the performance of several methods on the test set increases with different amounts of additional articles from Common Crawl.", "Using 10 additional articles causes a steep improvement compared to only using the original source articles from WCEP.", "However, using more than 100 articles only leads to minimal gains.", "We present a new large-scale MDS dataset for the news domain, consisting of large clusters of news articles, associated with short summaries about news events.", "We hope 
this dataset will facilitate the creation of real-world MDS systems for use cases such as summarizing news clusters or search results.", "We conducted extensive experiments to establish baseline results, and we hope that future work on MDS will use this dataset as a benchmark.", "Important challenges for future work include how to scale deep learning methods to such large amounts of source documents and how to close the gap to the oracle methods.", "This work was funded by the Irish Research Council (IRC) under grant number EBPPG/2018/23, the Science Foundation Ireland (SFI) under grant number 12/RC/2289_P2 and the enterprise partner Aylien Ltd." ]
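The article-summary classifier described in the WCEP excerpt uses the four features of Table 2 (tf-idf cosine similarities and entity-match counts) fed into a logistic regression. A minimal sketch of the feature computation follows; the `features` and `tfidf_vectors` names are hypothetical, and fitting IDF on only the three texts of one pair is a simplification of a real corpus-level TF-IDF fit.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Unigram bag-of-words vectors with TF-IDF weighting
    (smoothed IDF is an assumption for this sketch)."""
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))
    idf = {w: math.log(n / df[w]) + 1.0 for w in df}
    return [{w: tf * idf[w] for w, tf in Counter(toks).items()}
            for toks in tokenized]

def cosine(a, b):
    """Cosine similarity between two sparse (dict) vectors."""
    dot = sum(a[w] * b.get(w, 0.0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def features(summary, title, body, entities):
    """The four Table 2 features for one article-summary pair:
    title/body tf-idf similarity and entity matches by exact
    (case-insensitive) string containment."""
    vecs = tfidf_vectors([summary, title, body])
    return [
        cosine(vecs[0], vecs[1]),                           # title similarity
        cosine(vecs[0], vecs[2]),                           # body similarity
        sum(e.lower() in title.lower() for e in entities),  # entities in title
        sum(e.lower() in body.lower() for e in entities),   # entities in body
    ]
```

In the paper's pipeline these feature vectors would then train a logistic regression on the positive/negative pairs described above, with a probability threshold of 0.9 at assignment time.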
[ "abstain", "abstain", "objective", "method", "objective", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "result", "other", "result", "result", "objective", "method", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "abstain", "method", "abstain", "other", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "result", "abstain", "other" ]
[ "Training the generative models with minimal corpus is one of the critical challenges for building open-domain dialogue systems.", "Existing methods tend to use the meta-learning framework which pre-trains the parameters on all non-target tasks then fine-tunes on the target task.", "However, fine-tuning distinguishes tasks from the parameter perspective but ignores the model-structure perspective, resulting in similar dialogue models for different tasks.", "In this paper, we propose an algorithm that can customize a unique dialogue model for each task in the few-shot setting.", "In our approach, each dialogue model consists of a shared module, a gating module, and a private module.", "The first two modules are shared among all the tasks, while the third one will differentiate into different network structures to better capture the characteristics of the corresponding task.", "The extensive experiments on two datasets show that our method outperforms all the baselines in terms of task consistency, response quality, and diversity.", "Generative dialogue models often require a large amount of dialogues for training, and it is challenging to build models that can adapt to new domains or tasks with limited data.", "With recent advances in large-scale pre-training [Peters et al. , 2018; Howard and Ruder, 2018; Radford et al. , 2018; Devlin et al. , 2018], we can first pre-train a generative model on large-scale dialogues from the non-target domains and then fine-tune on the task-specific data corpus [Wang et al. , 2019a; Alt et al. , 2019a; Klein, 2019].", "While pre-training is beneficial, such models still require sufficient task-specific data for fine-tuning.", "They cannot achieve satisfying performance when very few examples Corresponding author are given [Bansal et al. 
, 2019].", "Unfortunately, this is often the case in many dialogue generation scenarios.", "For example, in personalized dialogue generation, we need to quickly adapt to the response style of a user's persona by just a few his or her dialogues [Madotto et al. , 2019; Zhang et al. , 2018]; in emotional dialogue generation, we need to generate a response catering to a new emoji using very few utterances containing this emoji [Zhou et al. , 18; Zhou and Wang, 2018].", "Hence, this is the focus of our paper few-shot dialogue generation, i.e. training a generative model that can be generalized to a new task (domain) within k -shots of its dialogues.", "A few works have been proposed to consider few-shot dialogue generation as a meta-learning problem [Madotto et al. , 2019; Qian and Yu, 2019; Mi et al. , 2019].", "They all rely on the popular model-agnostic meta-learning (MAML) method [Finn et al. , 2017].", "Take building personalized dialogue models as an example, previous work treats learning dialogues with different personas as different tasks [Madotto et al. , 2019; Qian and Yu, 2019].", "They employ MAML to find an initialization of model parameters by maximizing the sensitivity of the loss function when applied to new tasks.", "For a target task, its dialogue model is obtained by fine-tuning the initial parameters from MAML with its task-specific training samples.", "Despite the apparent success in few-shot dialogue generation, MAML still has limitations [Zint-graf et al. , 2019].", "The goal of generative dialogue models is to build a function mapping a user query to its response, where the function is determined by both the model structure and parameters [Brock et al. 
, 2018].", "By fine-tuning with a fixed model structure, MAML only searches the optimal parameter settings in the parameter optimization perspective but ignores the search of optimal network structures in the structure optimization perspective.", "Moreover, language data are inherently discrete and dialogue models are less vulnerable to input changes than image-related models [Niu and Bansal, 2018], which means gradients calculated from a few sentences may not be enough to change the output word from one to another.", "Thus there is a need to develop an effective way to adjust MAML for large model diversity in dialogue generation tasks.", "In this paper, we propose the Customized Model Agnostic Meta-Learning algorithm (CMAML) that is able to customize dialogue models in both parameter and model structure perspective under the MAML framework.", "The dialogue model of each task consists of three parts: a shared module to learn the general language generation ability and common characteristics among tasks, a private module to model the unique characteristic of this task, and a gate to absorb information from both shared and private modules then generate the final outputs.", "The network structure and parameters of the shared and gating modules are shared among all tasks, while the private module starts from the same network but differentiates into different structures to capture the task-specific characteristics.", "In summary, our contributions are as follows: We propose the CMAML algorithm that can customize dialogue models with different network structures for different tasks in the few-shot setting.", "The algorithm is general and well unified to adapt to various few-shot generation scenarios.", "We propose a pruning algorithm that can adjust the network structure for better fitting the training data.", "We use this strategy to customize unique dialogue models for different tasks.", "We investigate two crucial impact factors for meta-learning based methods, i.e., 
the quantity of training data and task similarity.", "We then describe the situations in which meta-learning can outperform other fine-tuning methods.", "Few-shot Dialogue Generation.", "The past few years have seen increasing attention on building dialogue models in few-shot settings, such as personalized chatbots that can quickly adapt to each user's profile or knowledge background [Zhang et al., 2018; Madotto et al., 2019], or that respond with a specified emotion [Zhou et al., 2018; Zhou and Wang, 2018].", "Early solutions are to use explicit [Tian et al., 2017; Zhang et al., 2018; Zhou et al., 2018] or implicit [Li et al., 2016b; Zhou and Wang, 2018; Zhou et al., 2018] task descriptions, then introduce this information into the generative models.", "However, these methods require manually created task descriptions, which are not available in many practical cases.", "An alternative promising solution to building few-shot dialogue models is meta-learning, especially MAML [Finn et al., 2017].", "Madotto et al. (2019) propose to regard learning with the dialogue corpus of each user as a task and endow the personalized dialogue models by fine-tuning the initialized parameters on the task-specific data.", "Qian and Yu (2019) and Mi et al. (2019) treat the learning from each domain in multi-domain task-oriented dialogue generation as a task, and apply MAML in a similar way.", "All these methods do not change the original MAML but directly apply it to their scenarios due to the model-agnostic property of MAML.", "Thus, task differentiation always counts on fine-tuning, which only searches for the best model for each task at the parameter level but not the model-structure level.", "Meta-learning.", "Meta-learning has achieved promising results in many NLP problems recently due to its fast adaptation ability on a new task using very few training data [Yu et al., 2019; Wang et al., 2019b; Obamuyide and Vlachos, 2019b; Alt et al. 
, 2019b].", "In general, there are three categories of meta-learning methods: metric-based methods [Vinyals et al. , 2016; Snell et al. , 2017; Sung et al. , 2018; Ye and Ling, 2019] which encode the samples into an embedding space along with a learned distance metric and then apply a matching algorithm, model-based methods [Santoro et al. , 2016; Obamuyide and Vlachos, 2019a] which depend on the model structure design such as an external memory storage to facilitate the learning process, and optimization-based methods [Finn et al. , 2017; Andrychowicz et al. , 2016; Huang et al. , 2018] which learn a good network initialization from which fine-tuning can converge to the optimal point for a new task with only a few examples.", "Methods belonging to the first two are proposed for classification, and those in the third category are model-agnostic.", "Therefore, it is intuitive to apply the optimization-based methods, in which MAML is most popular, for dialogue generation tasks.", "However, some researchers found that the original MAML has limited ability to model task-specific characteristics in the image or text classification scenarios [Jiang et al. , 2018; Sun et al. , Really I Really Emm Really , ? ? II I like don't like cats know cats ? II hate hate pets pets Similar tasks Similar tasks All tasks I have a dog Query Encoder User-i Decoder User-j Decoder Training Corpus shared module gating module privatemodule / tasksthatarechosen/notfortraining dataflow similartasks Generative Models Figure 1: The proposed CMAML algorithm applying on the personalized dialogue systems. Each customized dialogue model Seq2SPG consists of a shared, a private, and a gating module. The shared and gating module are the same among users and are trained on all tasks. The private module is unique for each user to describe this user's persona, and is trained on the corresponding and similar tasks. The lines in color indicate the data flow directions. 2019; Liu et al. 
, 2020].", "Jiang et al. (2018) build an attention layer over the convolutional layers, where the convolutional layer is for general features and the attention layer is for task-specific features.", "Sun et al. (2019) propose to learn a task-specific shifting and scaling operation on the general shared feed-forward layers.", "However, the involved operations in these two methods such as shifting and scaling are designed for feed-forward networks, and can not be applied to the generative models which generally rely on Seq2seq [Sutskever et al. , 2014] models with recurrent GRU [Cho et al. , 2014] or LSTM [Hochreiter and Schmidhu-ber, 1997] cells.", "In this paper, we propose a new meta-learning algorithm based on MAML that can enhance task-specific characteristics for generation models.", "In this section, we firstly describe the network structure of the proposed dialogue model, and then briefly introduce its pre-training.", "We aim to build dialogue models for different generation tasks in the few-shot setting.", "Now, we first describe the dialogue model of each task to be used in our training algorithm.", "It involves three network modules and noted as Seq2SPG (in Figure 1): Shared Module.", "It gains the basic ability to generate a sentence and thus its parameters are shared among all tasks.", "We employ a prevailing Seq2seq dialogue model [Bahdanau et al. 
, 2014].", "At each decoding step t, we feed the word x_t and the last hidden state h_{t-1} to the decoding cell, and obtain an output distribution o_s over the vocabulary.", "Private Module.", "It aims at modeling the unique characteristics of each task.", "We design a multilayer perceptron (MLP) in the decoder to fulfill this goal.", "Each task has its unique MLP network, which starts from the same initialization and then evolves into different structures during training.", "At each decoding step t, the MLP takes the word x_t and the output h_{t-1} of the shared module at step t-1 as input, then outputs a distribution o_p over the vocabulary.", "In our experiments, we also explore different inputs for the private module.", "Gating Module.", "We use a gate to fuse information from the shared and private modules: g_s = tanh(W_s [o_s, o_p] + b_s); g_p = tanh(W_p [o_s, o_p] + b_p); o = g_s ⊙ o_s + g_p ⊙ o_p (1), where W_s, W_p, b_s, b_p are parameters, ⊙ is the element-wise product, and o is the output word distribution.", "For the rest of the paper, p(T) denotes the task distribution, T_i denotes the i-th task to be trained, D_i^train and D_i^valid denote the training and validation corpora of task T_i, and θ_i denotes all training parameters of the dialogue model for T_i, which include the parameters θ_s/θ_{p_i}/θ_g of the shared/private/gating module respectively.", "We consider a model represented by a parameterized function f_θ with parameters θ.", "The model training for all tasks consists of two steps: pre-training and customized model training.", "In pre-training, CMAML employs the vanilla MAML to obtain a pre-trained dialogue model as the initial model for all tasks.", "At the beginning of MAML, the parameters θ are randomly initialized.", "Then, two main procedures are performed iteratively: meta-training and meta-testing.", "In meta-training, MAML first samples a set of tasks T_i ~ p(T).", "Then, for each task T_i, MAML adapts θ to obtain θ'_i with the task-specific data: θ'_i = θ - α ∇_θ L_{D_i^train}(f_θ) (2) In meta-testing, MAML tests the tasks T_i ~ p(T) with θ'_i to obtain the losses and then updates θ by θ ← θ - β ∇_θ Σ_{T_i ~ p(T)} L_{D_i^valid}(f_{θ'_i}) (3) Here, α and β are hyper-parameters.", "In standard MAML, each task obtains its parameters θ_i by fine-tuning the pre-trained θ.", "However, recall that fine-tuning fails to search for the best model from the network-structure perspective.", "Also, generative models are less sensitive to input changes, so a few utterances may not be enough to adapt θ into diverse θ_i for different tasks.", "To address these issues, we do not perform direct fine-tuning on each task, but design our second training step, Customized Model Training, in which the pre-trained private module can evolve into different structures to capture the characteristics of each task and encourage model diversity.", "After obtaining the pre-trained model from MAML, we employ Customized Model Training with the following two updating steps:", "Private Network Pruning.", "This step is applied to the private module only, and differentiates the MLP structure of each task.", "Each task retains its own subset of active MLP parameters, yielding a different MLP structure that characterizes the uniqueness of this task.", "Joint Meta-learning.", "In this step, we re-train the parameters of all three modules of each task using MAML again, but each private module now has its pruned MLP structure.", "Also, similar tasks with similar pruned MLP structures are jointly trained in order to enrich the training data.", "Recall the parameters θ_s/θ_p/θ_g of the shared/private/gating module.", "In this step, the private module with parameters θ_p will evolve into different structures with parameters θ_{p_i} to capture each task's unique characteristics.", "First, we fine-tune the whole dialogue model of each task from the MAML initialization with its own training data and add an L-1 regularization on the parameters of the private module.", "The goal of L-1 
regularization here is to make the parameters sparse such that only the parameters beneficial for generating task-specific sentences remain active.", "Second, we apply a top-down strategy to prune the private MLP for each task.", "This amounts to selecting edges in the fully connected layers of the MLP.", "We do not prune the layers connected to the input and output of the MLP.", "For the remaining layers, we start pruning from the one closest to the output.", "For the l-th layer, we consider the layers above it (> l) to be closer to the output, and its lower layers (< l) to be closer to the input.", "When we process the l-th layer, its upper layers have already been pruned.", "We only keep edges of the currently processed layer whose weight exceeds a certain threshold.", "If all edges in the l-th layer connected to a node are pruned, all edges connected to this node in the (l-1)-th layer will also be pruned.", "In this way, the parameters θ_p in the private module differentiate into |T| parameter sets θ_{p_i}, where each θ_{p_i} is a subset of θ_p.", "The pruning algorithm described above is illustrated in Algorithm 1.", "4.2 Joint Meta-learning So far, every task has a unique network structure in its private module.", "Now we jointly train the whole dialogue models of all tasks.", "We start from the pre-trained MAML initialization again.", "For the shared and gating modules, all tasks share the same parameters, and they are trained with all training data.", "The private module, which is to capture the uniqueness of each task, is supposed to be trained on task-specific data.", "However, we do not have sufficient training data for each task in the few-shot setting, so the private module may not be trained well.", "Fortunately, all private modules evolve from the same MLP structure, and similar tasks naturally share overlapping network structures, i.e. 
remaining edges after pruning are overlapped.", "This inspires us to train each edge in the private MLP on all training samples of the tasks in which this edge is not pruned.", "Concretely, we train the private MLP in this way: [Algorithm 1: Private Network Pruning. Input: all parameters θ_p in the private MLP module, the sparsity threshold, and the total number of layers L in the private MLP module.]", "For each edge e in the MLP, if it is active in more than one task, its corresponding parameters θ_{p_e} are updated on the data of all tasks j in which the edge is active, i.e., θ_{p_e} ⊂ θ_{p_j}:", "where each θ_{p_i}/θ'_{p_i} only contains the θ_{p_e}/θ'_{p_e}'s of all active edges in the i-th task.", "We summarize the gradient updates of the three modules in our proposed dialogue model during customized model training in Algorithm 2.", "For the shared and gating modules, gradients are updated in the same way as in MAML.", "The update of the private module is replaced by the above Eq. 4 and Eq. 5 introduced in joint meta-learning.", "The loss function used to calculate the gradients in our model is the negative log-likelihood of generating the response r given the input query q: L = -log p(r | q; θ_s, θ_p, θ_g) (6)", "[Algorithm 2: Customized Model Training. Input: the distribution p(T) over the task set, the step sizes α and β.]", "We perform experiments on Persona-chat [Madotto et al., 2019] and MojiTalk [Zhou and Wang, 2018], which are treated as few-shot dialogue generation tasks in previous work [Zhang et al., 2018; Madotto et al., 2019; Zhou and Wang, 2018; Zhou et al., 2018].", "Persona-chat has 1137/99/100 users for training/validation/evaluation, and each user has 121 utterances on average.", "We follow the previous work [Madotto et al. 
, 2019] and concatenate all the contextual utterances including the query as the input sequence.", "We regard building a dialogue model for a user as a task on this dataset.", "MojiTalk has 50/6/8 emojis for training/validation/evaluation.", "Each training/validation emoji has 1000 training samples on average, and each evaluation emoji has 155 samples on average.", "We regard generating responses with a designated emoji as a task.", "On both datasets, the data ratio for meta-training and meta-testing is 10:1.", "We implement our shared module based on the Seq2seq model with pre-trained GloVe embeddings [Pennington et al., 2014] and LSTM units, and use a 4-layer MLP for the private module.1", "The dimensions of the word embedding, hidden state, and MLP output are set to 300.", "In CMAML, we pre-train the model for 10 epochs and re-train each model for 5 steps to prune the private network.", "The L-1 weight in the re-training stage is 0.001, and the pruning threshold is 0.05.", "We follow the other hyperparameter settings in Madotto et al. [2019].", "1 Code is available at https://github.com/zequnl/CMAML 5.3 Competing Methods Pretrain-Only: We pre-train a unified dialogue generation model with data from all training tasks and then directly test it on the testing tasks.", "We try three base generation models: the Seq2seq [Bahdanau et al., 2014], the Speaker model [Li et al., 2016b], and the Seq2SPG proposed in Section 3.1.", "Speaker incorporates the task (user/emoji) embeddings in the LSTM cell, and the task embeddings of testing tasks are random parameters in this setting.", "Finetune: We fine-tune the pre-trained models on each testing task, denoted as Seq2seq-F, Speaker-F and Seq2SPG-F.", "MAML [Madotto et al. 
, 2019]: We apply the MAML algorithm to the base models Seq2seq and Seq2SPG, and denote them as MAML-Seq2seq and MAML-Seq2SPG.", "MAML-Seq2SPG uses the same base model as the proposed CMAML but does not apply the pruning algorithm, which helps to verify the effectiveness of the pruning algorithm and joint meta-learning.", "Note that we did not apply MAML to the Speaker model as it shows no improvement compared with Seq2seq.", "CMAML: We try two variants of our proposed algorithm.", "CMAML-Seq2SPG is our full model (equal to CMAML in previous sections), where Seq2SPG is the base dialogue model and the pruning algorithm is applied to customize unique model structures for tasks.", "CMAML-Seq2SP'G uses a different base model denoted Seq2SP'G, where the private module only takes the output of the shared module as the input.", "The pruning algorithm is also applied to the private module for network customization.", "Response quality/diversity: We use BLEU [Papineni et al., 2002] to measure the word overlap between the reference and the generated sentence; PPL, the perplexity of the generated sentence; and Dist-1 [Li et al., 2016a; Song et al., 2017, 2018] to evaluate response diversity, which calculates the ratio of distinct 1-grams in all generated test responses.", "Task consistency: We use the C score [Madotto et al. 
, 2019] in Persona-chat, which uses a pre-trained natural language inference model to measure the response consistency with the persona description, and E-acc [Zhou and Wang, 2018] in MojiTalk, which uses an emotion classifier to predict the correlation between a response and the designated emotion.", "Model difference: It is hard to measure the models' customization ability as we do not have the ground-truth models.", "Hence, we define the average model difference over pairwise tasks as the Diff Score of each method, and the model difference of a method before and after fine-tuning as the Δ Score.", "The model difference between T_i and T_j is the Euclidean distance of their parameters normalized by their parameter count: D(T_i, T_j) = ||θ_i - θ_j||_2 / M.", "Here, θ_i/θ_j include all model parameters of the task, and M is the total number of parameters of the model.", "A set of models that capture the unique characteristics of each task should differ from each other and will have a higher Diff Score, indicating that a large Diff Score is a necessary condition for a strong customization ability.", "Similarly, a model that changes a lot during fine-tuning for task-specific adaptation will achieve a higher Δ Score, indicating that the Δ Score is likewise a necessary condition for a good adaptation ability.", "Human Evaluation.", "We invited 3 well-educated graduate students to annotate the 100 generated replies for each method.", "For each dataset, the annotators are requested to grade each response in terms of quality and task consistency (i.e. 
personality consistency in Persona-Chat and emoji consistency in MojiTalk) independently on a three-point scale: 2 (good), 1 (fair) and 0 (bad).", "Quality measures the appropriateness of replies: 2 for fluent, highly consistent (between query and reply), and informative replies; 1 for replies with a few grammar mistakes, moderate consistency, or universal replies; and 0 for incomprehensible or off-topic replies.", "Task consistency measures whether a reply is consistent with the characteristics of a certain task: 2 for highly consistent, 1 for not conflicting, and 0 for contradictory.", "Notice that the user description (Persona dataset) and sentences with a certain emoji (MojiTalk dataset) are provided as references.", "Volunteers, instead of the authors, conduct the double-blind annotations on shuffled samples to avoid subjective bias.", "Quality/Diversity.", "In the Persona-chat dataset, Pretrain-Only methods provide the baselines for all methods.", "In Pretrain-Only, Seq2SPG achieves the best performance in terms of both automatic and human measurements, indicating the appropriateness of the proposed model structure.", "Table 1: Overall performance on Persona-chat (top) and MojiTalk (bottom) in terms of quality (Human, Perplexity, BLEU), diversity (Dist-1), task consistency (Human, C score, E-acc), structure differences among tasks (Diff Score, x10^-10), and model change after adaptation (Δ Score, x10^-10).
Method | Quality | Task Cons. | PPL | BLEU | Dist-1 | C score/E-acc | Diff Score | Δ Score
Persona-Chat:
Seq2seq | 0.67 | 0.10 | 37.91 | 1.27 | 0.0019 | -0.16 | 0.00 | 0.00
Speaker | 0.85 | 0.10 | 40.17 | 1.25 | 0.0037 | -0.14 | 0.00 | 0.00
Seq2SPG | 0.67 | 0.03 | 36.46 | 1.41 | 0.0023 | -0.14 | 0.00 | 0.00
Seq2seq-F | 0.78 | 0.11 | 33.65 | 1.56 | 0.0046 | -0.05 | 17.97 | 9.19
Speaker-F | 0.87 | 0.25 | 35.61 | 1.52 | 0.0059 | 0.03 | 285.11 | 143.90
Seq2SPG-F | 0.70 | 0.07 | 32.68 | 1.54 | 0.0045 | -0.05 | 292.85 | 156.30
MAML-Seq2seq | 0.97 | 0.37 | 37.43 | 1.54 | 0.0087 | 0.14 | 134.01 | 67.79
MAML-Seq2SPG | 0.85 | 0.36 | 35.89 | 1.70 | 0.0074 | 0.16 | 401.28 | 198.90
CMAML-Seq2SP'G | 0.98 | 0.58 | 37.32 | 1.43 | 0.0089 | 0.15 | 479.21 | 238.64
CMAML-Seq2SPG | 1.15 | 0.69 | 36.30 | 1.70 | 0.0097 | 0.18 | 514.44 | 263.82
MojiTalk:
Seq2seq | 0.56 | 0.39 | 218.95 | 0.36 | 0.0342 | 0.73 | 0.00 | 0.00
Speaker | 0.38 | 0.26 | 418.96 | 0.19 | 0.0530 | 0.70 | 0.00 | 0.00
Seq2SPG | 0.77 | 0.46 | 158.74 | 0.64 | 0.0239 | 0.74 | 0.00 | 0.00
Seq2seq-F | 0.50 | 0.35 | 217.60 | 0.40 | 0.0326 | 0.72 | 15.96 | 8.88
Speaker-F | 0.39 | 0.25 | 403.92 | 0.21 | 0.0528 | 0.72 | 39.08 | 29.10
Seq2SPG-F | 0.76 | 0.47 | 157.92 | 0.65 | 0.0228 | 0.74 | 72.43 | 40.94
MAML-Seq2seq | 0.66 | 0.29 | 179.02 | 0.54 | 0.0109 | 0.70 | 183.05 | 117.09
MAML-Seq2SPG | 0.71 | 0.40 | 181.56 | 0.73 | 0.0246 | 0.74 | 306.40 | 176.31
CMAML-Seq2SP'G | 0.64 | 0.32 | 172.92 | 0.76 | 0.0102 | 0.75 | 142.90 | 81.15
CMAML-Seq2SPG | 0.78 | 0.49 | 185.97 | 0.85 | 0.0210 | 0.77 | 345.42 | 190.64", "Finetune methods are better than Pretrain-Only methods in most cases.", "MAML methods perform no better than Finetune methods on BLEU but achieve relatively higher Dist-1 scores.", "This indicates that MAML helps to boost response diversity.", "Enhanced with the proposed pruning algorithm, CMAML methods show great improvement over all competing methods on both quality and diversity measurements.", "In particular, our full model CMAML-Seq2SPG shows clearly better performance, and the reasons can be ascribed to two aspects: first, the proposed Seq2SPG has a better model structure for our task, and second, the pruning algorithm makes the models more likely to generate a user-coherent response.", "The performance of the competing methods on the MojiTalk dataset is mostly similar to that on Persona-chat; one difference is that Speaker achieves the highest Dist-1 score among all the methods.", "By carefully analyzing the generated cases, we find all non-meta-learning methods (Pretrain-Only and Finetune) consistently produce random word sequences, which means they completely fail in the few-shot setting on this task.", "However, meta-learning-based methods survive.", "Task 
Consistency.", "On both datasets, Finetune methods show no significant differences in C score, E-acc, and task consistency compared with Pretrain-Only methods, which means that simple fine-tuning is not enough to improve task consistency.", "All meta-learning methods, including MAML and CMAML, outperform Finetune.", "Compared with MAML-Seq2seq and MAML-Seq2SPG, CMAML-Seq2SPG obtains 22.2%/12.5% and 11.8%/5.6% improvements on C score and E-acc.", "This means that the private modules in CMAML-Seq2SPG are well pruned to better describe the unique characteristics of each task.", "We also observe that on MojiTalk, CMAML-Seq2SPG achieves a good improvement over the other baselines on the BLEU score but a more limited improvement on E-acc and the task consistency score than on Persona-chat.", "This suggests that when the training data is limited, the generative models tend to focus on the correctness of the response rather than on task consistency.", "By jointly analyzing the response quality and task consistency measurements, we can conclude that the responses produced by CMAML-Seq2SPG are not only superior in response quality but also cater to the characteristics of the corresponding task.", "Model Differences.", "Even though a high difference score among tasks does not by itself indicate that each model has captured its unique characteristics, a set of models that do capture their own characteristics will have a higher difference score.", "Hence, we present the difference scores of the competing methods as a reference index.", "In Table 1, we can see that fine-tuning in non-meta-learning methods (Pretrain-Only and Finetune) does not boost the model differences between tasks.", "MAML helps to increase the model differences but is not as good as the proposed CMAML methods.", "CMAML-Seq2SPG achieves the highest model difference scores on both datasets as it distinguishes different tasks at both the parameter and the model structure level.", 
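The model difference D(T_i, T_j) above is just a Euclidean distance over flattened parameter vectors, normalized by the parameter count M. A minimal NumPy sketch of the three scores (function names are illustrative, not taken from the paper's released code):

```python
import numpy as np

def model_difference(theta_i, theta_j):
    """D(T_i, T_j) = ||theta_i - theta_j||_2 / M: Euclidean distance between
    two tasks' flattened parameter vectors, normalized by parameter count M."""
    theta_i = np.asarray(theta_i, dtype=float).ravel()
    theta_j = np.asarray(theta_j, dtype=float).ravel()
    return float(np.linalg.norm(theta_i - theta_j)) / theta_i.size

def diff_score(task_params):
    """Diff Score: average pairwise model difference over all task models."""
    n = len(task_params)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(model_difference(task_params[i], task_params[j])
               for i, j in pairs) / len(pairs)

def delta_score(theta_before, theta_after):
    """Delta Score: difference of one model before vs. after fine-tuning."""
    return model_difference(theta_before, theta_after)
```

Note that identical parameter sets give a score of exactly 0 (as for the non-fine-tuned Pretrain-Only rows in Table 1), and the normalization by M makes scores comparable across models of different sizes.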
"A higher Δ Score for a method means the dialogue models it produces are easier to fine-tune.", "All non-meta-learning methods have much lower Δ Scores than the MAML methods.", "CMAML-Seq2SPG has the highest scores on both datasets, indicating that the active edges in the private module are more likely to be fine-tuned to better fit the corpora of the corresponding tasks.", "We also observe that CMAML-Seq2SP'G has relatively low scores, which indicates that its base generation model Seq2SP'G is not as good as Seq2SPG.", "We further examine two factors that may have a great impact on the performance: the quantity of training data and the similarity among tasks.", "Few-shot Settings.", "We only use the Persona-chat dataset for this analysis, because MojiTalk has too little data to decrease further.", "In Persona-chat, each user has 121 training samples on average, and we evaluate all the methods in 100- and 110-sample settings (both in train and test) in Table 2 because all the methods tend to produce random sequences when each task contains fewer than 100 samples.", "For Pretrain-Only and Finetune, the quality scores improve as the quantity of training data increases, while the C scores remain almost the same, as these methods are not sensitive to the differences among tasks.", "MAML methods do not change much in BLEU score as the data grows, but their C scores keep increasing.", "Both the BLEU score and the C score of CMAML-Seq2SPG keep increasing with the data growth, and it always achieves the best performance among all the methods.", "This shows that the customized generative models are suitable for the corresponding tasks and can always exploit the full potential of the training data.", "Task Similarity.", "Again, we only use the Persona-chat dataset because we cannot define similarities among emojis.", "We construct two datasets: one contains 100 similar users and another contains 100 dissimilar users (both in train and test).", "The performance of all the 
methods is close to each other in the similar-user setting.", "This means that meta-learning-based methods have no advantage on similar tasks.", "In the dissimilar-users setting, CMAML-Seq2SPG performs best on the C score and BLEU.", "We conclude that user similarity influences the performance of our model.", "Compared to the dissimilar-users setting, the BLEU score in the similar-users setting is higher, but the C score is lower.", "The possible reason is that the generative models do not distinguish similar tasks and regard all tasks as one task during training.", "Due to limited space, we present only one case from the Persona-chat dataset, in Table 3.", "Pretrain-Only and Finetune methods produce general responses that carry little information.", "MAML methods tend to generate diverse responses as their initial parameters are easier to fine-tune.", "Even though the user profiles are not used for training, CMAML-Seq2SPG can quickly learn the persona information 'pediatrician' from its training dialogues while the other baselines cannot.", "From another perspective, the pruned private module in CMAML-Seq2SPG can be regarded as a special memory that stores task-specific information without an explicit definition of memory cells.", "In this paper, we address the problem of few-shot dialogue generation.", "We propose CMAML, which is able to customize unique dialogue models for different tasks.", "CMAML introduces a private network into each task's dialogue model, whose structure evolves during training to better fit the characteristics of this task.", "The private module is trained only on the corpora of the corresponding task and its similar tasks.", "The experimental results show that CMAML achieves the best performance in terms of response quality, diversity and task consistency.", "We also measure the model differences among tasks, and the results confirm that CMAML produces diverse dialogue models for different tasks.", "This paper is partially supported by the National 
Key Research and Development Program of China with Grant No. 2018AAA0101900/2018AAA0101902, Beijing Municipal Commission of Science and Technology under Grant No.", "Z181100008918005, and the National Natural Science Foundation of China (NSFC Grant No. 61772039, No. 91646202, and No. 61876196)." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "other", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "objective", "method", "objective", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", 
"result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "result", "other", "other" ]
[ "The ability to generate clarification questions, i.e., questions that identify useful missing information in a given context, is important in reducing ambiguity.", "Humans use previous experience with similar contexts to form a global view and compare it to the given context to ascertain what is missing and what is useful in the context.", "Inspired by this, we propose a model for clarification question generation where we first identify what is missing by taking a difference between the global and the local view and then train a model to identify what is useful and generate a question about it.", "Our model outperforms several baselines as judged by both automatic metrics and humans.", "An important but under-explored aspect of text understanding is the identification of missing information in a given context, i.e., information that is essential to accomplish an underlying goal but is currently missing from the text.", "Identifying such missing information can help to reduce ambiguity in a given context, which can aid machine learning models in prediction and generation (De Boni and Manandhar, 2003; Stoyanchev et al., 2014).", "Rao and Daumé III (2018, 2019) recently proposed the task of clarification question generation as a way to identify such missing information in context.", "They propose a model for this task which, while successful at generating fluent and relevant questions, still falls short in terms of usefulness and identifying missing information.", "With the advent of large-scale pretrained generative models (Radford et al., 2019; Lewis et al., 2019; Raffel et al., 2019), generating fluent and coherent text is within reach.", "However, generating clarification questions requires going beyond fluency and relevance.", "Doing so requires understanding what is missing and what, if included, could be useful to the consumer of the information.", "TITLE: Sony 18x Optical Zoom 330x Digital Zoom Hi8 Camcorder DESC: Sony Hi-8mm Handycam Vision camcorder 330X 
digital zoom, Nightshot(TM) Infrared 0 lux system, Special Effects, 2.5\" SwivelScreen color LCD and 16:9 recording mode, Laserlink connection.", "Image Stabilization, remote, built-in video light.", "QUESTION: Can I manually control the video quality?", "Humans are naturally good at identifying missing information in a given context.", "They possibly make use of global knowledge, i.e., recollecting previous similar contexts and comparing them to the current one, to ascertain what information is missing and what, if added, would be the most useful.", "Inspired by this, we propose a two-stage framework for the task of clarification question generation.", "Our model hinges on the concept of a schema, which we define as the key pieces of information in a text.", "In the first stage, we find what's missing by taking a difference between the global knowledge's schema and the schema of the local context (3.1).", "In the second stage, we feed this missing schema to a fine-tuned BART (Lewis et al., 2019) model to generate a question, which is further made more useful using PPLM (Dathathri et al., 2019) (3.2).", "1 We test our proposed model on two scenarios (2): community-QA, where the context is a product description from amazon.com (McAuley and Yang, 2016) (see e.g. 
Table 1); and dialog, where the context is a dialog history from the Ubuntu Chat forum (Lowe et al., 2015).", "We compare our model to several baselines (4.2) and evaluate outputs using both automatic metrics and human evaluation to show that our model significantly outperforms the baselines in generating useful questions that identify missing information in a given context (4.4).", "Furthermore, our analysis reveals the reasoning behind generated questions as well as the robustness of our model to the available contextual information (5).", "Rao and Daumé III (2018) define the task of clarification question generation as: given a context, generate a question that identifies missing information in the context.", "We consider two scenarios: Community-QA. Community-driven question-answering has become a common venue for crowd-sourcing answers.", "These forums often have some initial context on which people ask clarification questions.", "We consider the Amazon question-answer dataset (McAuley and Yang, 2016), where the context is a product description and the task is to generate a clarification question that helps a potential buyer better understand the product.", "Goal-Oriented Dialog. With the advent of high-quality speech recognition and text generation systems, we are increasingly using dialog as a mode to interact with devices (Clark et al., 2019).", "However, these dialog systems still struggle when faced with ambiguity and could greatly benefit from having the ability to ask clarification questions.", "We explore such a goal-oriented dialog scenario using the Ubuntu Dialog Corpus (Lowe et al., 2015), consisting of dialogs between a person facing a technical issue and another person helping them resolve it.", "Given a context, i.e., a dialog history, the task is to generate a clarification question that would aid the resolution of the technical issue.", "Figure 1 depicts our approach at a high level.", "We propose a two-stage approach for the task of clarification question 
generation.", "In the first stage, we identify the missing information in a given context.", "For this, we first group together all similar contexts in our data 2 to form the global schema for each high-level class.", "Next, we extract the schema of the given context to form the local schema.", "Finally, we take a difference between the local schema and the global schema (of the class to which the context belongs) to identify the missing schema for the given context.", "In the second stage, we train a model to generate a question about the most useful information in the missing schema.", "For this, we fine-tune a BART model (Lewis et al., 2019) on (missing schema, question) pairs and, at test time, use PPLM (Dathathri et al., 2019) with a usefulness classifier as the attribute model to generate a useful question about missing information.", "We then compute its missing schema by taking the set difference between the global schema of class K and the local schema of the context c: missing_schema_c = global_K \ local_c (4) More specifically, we start with the elements in the global schema and remove elements that have a semantic match (see appendix) with any element in the local schema to obtain the missing schema.", "question to improve performance of a Question-Answering system, we see the need to identify important elements in a context to ask a better question.", "We define the schema of a sentence s as a set consisting of one or more triples of the form (key-phrase, verb, relation) and/or one or more key-phrases: schema_s = { element }, where element ∈ { (key-phrase, verb, relation), key-phrase } (1)", "Schema Extraction: Our goal is to extract a schema from a given context.", "We consider (key-phrase, action verb, relation) as the basic element of our schema.", "Such triples have been found to be representative of key information in previous work (Vedula et al., 2019).", "Given a sentence from the context, we first extract bigram and unigram 
key-phrases using YAKE (Yet-Another-Keyword-Extractor) (Campos et al., 2020) and retain only those that contain at least a noun.", "We then obtain the dependency parse tree (Qi et al., 2020b) of the sentence and map the key-phrases to tree nodes.", "3 Now, to obtain the required triple, we need to associate a verb and a relation with each key-phrase.", "This procedure is described in Alg. 1. At a high level, we use the path between the key-phrase and the closest verb in the dependency tree to establish a relation between the key-phrase and the verb.", "In cases where there is no path, we use only the key-phrase as our schema element.", "Figure 2 shows an example dependency tree for a sentence.", "Creating local schema Given a context, we extract a schema for each sentence in the context.", "The local schema of a context c is the union of the schemata of each sentence s in the context.", "local_schema_c = ∪_{s ∈ c} schema_s (2) 3 In the case of bigram phrases, we merge the tree nodes.", "Creating global schema We define the global schema at the class level, where a 'class' is a group of similar contexts.", "For Amazon, classes consist of groups of similar products, and for Ubuntu, classes consist of groups of similar dialogs (see 4.1 for details).", "The global schema of a class K is the union of the local schemata of all contexts c belonging to K.", "A naive union of all local schemata can result in a global schema that has a long tail of low-frequency schema elements.", "Moreover, it may have redundancy where schema elements with similar meaning are expressed differently (e.g. 
OS and operating system).", "We therefore use word-embedding-based similarity to group together similar key-phrases and retain only the most frequent elements (see appendix).", "Creating a missing schema Given a context c, we first determine the class K to which the context belongs.", "Our goal is to generate a useful question about missing information.", "In 3.1, we explained how we compute the missing schema for a given context; here we describe how we train a model to generate a useful question given the missing schema.", "BART-based generation model Our generation model is based on the BART (Lewis et al., 2019) encoder-decoder model, which is state-of-the-art for various generation tasks including dialog generation and summarization.", "We start with the pretrained base BART model consisting of a six-layer encoder and a six-layer decoder.", "We fine-tune this model on our data, where the input is the missing schema and the output is the question.", "The elements of the missing schema in the input are separated by a special [SEP] token.", "Since the elements in our input do not have any order, we use the same positional encoding for all input positions.", "We use a token type embedding layer with three types of tokens: key-phrases, verbs, and relations.", "PPLM-based decoder We observed during our human evaluation 4 that a BART model fine-tuned in this manner, in spite of generating questions that ask about missing information, does not always generate useful questions.", "We therefore propose to integrate the usefulness criterion into our generation model.", "We use the Plug-and-Play Language Model (PPLM) (Dathathri et al., 2019) during decoding (at test time).", "The attribute model of the PPLM in our case is a usefulness classifier trained on bags-of-words of questions.", "In order to train such a classifier, we need usefulness annotations on a set of questions.", "For the Amazon dataset, we collect usefulness scores (0 or 1) on 5000 questions 
using human annotation, whereas for the Ubuntu dataset we assume positive labels for (true context, question) pairs and negative labels for (random context, question) pairs, and use 5000 such pairs to train the usefulness classifier.", "Details of negative sampling for the Ubuntu dataset are in the appendix.", "We aim to answer the following research questions (RQ):", "1. Is the model that uses missing schema better at identifying missing information compared to models that use the context directly to generate questions?", "2. Do large-scale pretrained models help generate better questions?", "3. Does the PPLM-based decoder help increase the usefulness of the generated questions?", "Amazon The Amazon review dataset (McAuley et al., 2015) consists of descriptions of products on amazon.com, and the Amazon question-answering dataset (McAuley and Yang, 2016) consists of questions (and answers) asked about products.", "Given a product description and N questions asked about the product, we create N instances of (context, question) pairs, where the context consists of the description and previously asked questions (if any).", "We use the Electronics category consisting of 23,686 products.", "We split this into train, validation and test sets (Table 2).", "The references for each context are all the questions (average=6) asked about the product.", "A class is defined as a group of products within a subcategory (e.g. 
DSLR Camera) as defined in the dataset.", "We restrict a class to have at most 400 products; a larger subcategory is broken into lower-level subcategories (based on the product hierarchy), resulting in 203 classes.", "While creating the global schema, we exclude target questions from validation and test examples.", "The product descriptions and associated metadata come as inputs at test time.", "Hence, including them from all splits while creating the global schema does not expose the test and validation targets to the model during training.", "Ubuntu The Ubuntu dialog corpus (Lowe et al., 2015) consists of utterances of dialog between two users on the Ubuntu chat forum.", "Given a dialog, we identify utterances that end with a question mark.", "We then create data instances of (context, question), where the question is the utterance ending with a question mark and the context consists of all utterances before the question.", "We consider only those contexts that have at least five and at most ten utterances.", "Table 2 shows the number of data instances in the train, validation and test splits.", "Unlike the Amazon dataset, each context has only one reference question.", "A class is defined as a group of dialogs that address similar topics.", "Since such class information is not present in the dataset, we use k-means to cluster dialogs into classes.", "Each dialog was represented using a TF-IDF vector.", "After tuning the number of clusters based on the sum of squared distances of dialogs to their closest cluster center, we obtain 26 classes.", "We follow a similar scheme as with Amazon, excluding target questions from the validation and test sets while building the global schema.", "Retrieval We retrieve the question from the train set whose schema overlaps most with the missing schema of the given context.", "GAN-Utility The state-of-the-art model for the task of clarification question generation (Rao and Daumé III, 2019), trained on 
(context, question, answer) triples.", "Transformer A transformer (Vaswani et al., 2017) 5 model trained on (context, question) pairs.", "BART We fine-tune a BART model (Lewis et al., 2019) on (context, question) pairs.", "BART + missinfo We compare to a BART model fine-tuned on (missing schema, question) pairs.", "BART + missinfo + WD This is similar to the BART + missinfo baseline, with the modification that, at test time only, we use a weighted-decoding (WD) strategy (Ghazvininejad et al., 2017) by re-defining the probability of words in the vocabulary using the usefulness criterion (more in appendix).", "BART + missinfo + PPLM This is our proposed model as described in 3, where we fine-tune the BART model on (missing schema, question) pairs and use a usefulness-classifier-based PPLM model for decoding at test time.", "BLEU-4 (Papineni et al., 2002) evaluates 4-gram precision between model generations and references", "at the corpus level; METEOR (Banerjee and Lavie, 2005) additionally uses stem and synonym matches for similarity; and Distinct-2 (Li et al., 2016) measures diversity by calculating the number of distinct bigrams in model generations scaled by the total number of generated tokens.", "Similar to Rao and Daumé III (2019), we conduct a human evaluation on Amazon Mechanical Turk to evaluate model generations on the four criteria below.", "Each generated output is shown with the context and is evaluated by three annotators.", "Relevance We ask Is the question relevant to the context? and let annotators choose between Yes (1) and No (0).", "Fluency We ask Is the question grammatically well-formed, i.e., a fluent English sentence? and let annotators choose between Yes (1) and No (0).", "Missing Information We ask Does the question ask for new information currently not included in the context? 
and let annotators choose between Yes (1) and No (0).", "Usefulness We perform a comparative study where we show annotators two model-generated questions (in a random order) along with the context.", "For Amazon, we ask Choose which of the two questions is more useful to a potential buyer of the product.", "For Ubuntu, we ask Choose which of the two questions is more useful to the other person in the dialog.", "Amazon Table 3 shows automatic metric results on Amazon.", "Under BLEU-4 and METEOR, the retrieval model performs the worst, suggesting that picking a random question that matches the most with the missing schema does not always yield a good question.", "This strengthens the need for the second stage of our proposed model, i.e., BART + PPLM-based learning.", "GAN-Utility, which is state-of-the-art on Amazon, outperforms the Transformer baseline, suggesting that training a larger model (in terms of the number of parameters) does not always yield better questions.", "BART, on the other hand, outperforms GAN-Utility, suggesting the benefit of large-scale pretraining (RQ2).", "BART+missinfo further outperforms BART, showing the value of training on missing schemata instead of training directly on the context (RQ1).", "A variation of this model that uses weighted decoding performs marginally better on METEOR but slightly worse on BLEU-4.", "Our final proposed model, i.e., BART+missinfo+PPLM, performs the best among all baselines across both BLEU-4 and METEOR.", "Ubuntu Table 6 shows the results of human judgments on the model generations of 150 randomly sampled dialog contexts from the Ubuntu test set.", "In terms of relevance, we find that the transformer and BART baselines produce less relevant [Table 5: Human judgment results (0-1) on 300 randomly sampled descriptions from the Amazon test set. Model / Relevancy / Fluency / MissInfo: GAN-Utility 0.90 / 0.86 / 0.81; BART 0.94 / 0.92 / 0.77; + missinfo 0.97 / 0.92 / 0.87; + missinfo + PPLM 0.99 / 0.93 / 0.89; Reference 0.96 / 0.83 / 0.89] 
"[Table 6: Human judgment results (0-1) on 150 randomly sampled dialog contexts from the Ubuntu test set. Model / Relevancy / Fluency / MissInfo: Transformer 0.74 / 0.99 / 0.99; BART 0.69 / 0.99 / 0.96; + missinfo 0.81 / 0.95 / 0.98; + missinfo + PPLM 0.91 / 0.83 / 0.99; Reference 0.85 / 0.83 / 0.96]", "produces the most diverse questions (as also observed by Rao and Daumé III (2019)) since it selects among human-written questions, which tend to be more diverse than model-generated ones.", "Among other baselines, the transformer interestingly has the lowest diversity, whereas GAN-Utility and BART come very close to each other.", "Model ablations that use the missing schema produce more diverse questions, further strengthening the importance of training on the missing schema.", "Our model, i.e., BART+missinfo+PPLM, despite outperforming all baselines (except retrieval), is still far from the reference questions in terms of diversity, suggesting room for improvement.", "Ubuntu Table 4 shows the results of automatic metrics on Ubuntu.", "6 The overall BLEU-4 and METEOR scores are much lower compared to Amazon since Ubuntu has only one reference per context.", "Under BLEU-4 and METEOR scores, similar to Amazon, we find that the retrieval baseline has the lowest scores.", "The Transformer baseline outperforms the retrieval baseline but lags behind BART, again showing the importance of large-scale pretraining.", "The difference between the BLEU-4 scores of BART+missinfo and our final proposed model is not significant, but their METEOR score difference is, suggesting that our model produces questions that may be lexically different from the references but have more semantic overlap with the reference set.", "Under Distinct-2 scores, we find the same trend as in Amazon, with the retrieval model being the most diverse and our final model outperforming all other baselines.", "Amazon Table 5 shows the human judgment results on model generations for 300 randomly", "sampled product descriptions from the Amazon test set",
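The Distinct-2 diversity metric used in these comparisons (the number of distinct bigrams across model generations scaled by the total number of generated tokens) can be sketched as below; whitespace tokenization is an assumption, not necessarily what the original evaluation used:

```python
def distinct_2(generations):
    """Distinct-2: unique bigrams across all generations divided by
    the total number of generated tokens (higher = more diverse)."""
    bigrams = set()
    total_tokens = 0
    for text in generations:
        tokens = text.split()  # assumption: simple whitespace tokenization
        total_tokens += len(tokens)
        bigrams.update(zip(tokens, tokens[1:]))
    return len(bigrams) / total_tokens if total_tokens else 0.0

# Two near-duplicate generations share most bigrams, lowering the score.
outs = ["is this laptop compatible with windows",
        "is this laptop compatible with linux"]
score = distinct_2(outs)
```

A fully distinct pair of outputs would approach (n-1)/n per n-token sentence, while repeated generic questions drive the score toward zero, which is why the retrieval baseline (drawing from diverse human-written questions) scores highest on this metric.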
"Under relevancy and fluency, all models score reasonably, with our proposed model producing the most relevant and fluent questions.", "Under missing information, the BART model, fine-tuned on the context instead of the missing schema, has the lowest score.", "GAN-Utility outperforms BART but significantly lags behind BART+missinfo and BART+missinfo+PPLM, reaffirming our finding from the automatic metric results that our idea of feeding the missing schema to a learning model helps.", "We additionally observe that the human-written questions score lower than model-generated questions under the 'fluency' and 'missing information' criteria, mirroring similar observations from Rao and Daumé III (2018, 2019).", "We believe the reason for this is that human-written questions often have typos or are written by non-native speakers (leading to lower fluency).", "Moreover, humans may miss out on reading full product descriptions, causing them to ask about details that are already included in the description (leading to lower missing information scores).", "Figure 3a shows the results of the pairwise comparison on the usefulness criterion.", "We find that our model wins over GAN-Utility by a significant margin, with humans preferring our model-generated questions 77% of the time.", "Our model also beats the BART baseline 66% of the time, further affirming the importance of using the missing schema.", "Finally, our model beats the BART+missinfo model 61% of the time, suggesting that the PPLM-based decoder that uses the usefulness classifier is able to produce much more useful questions (RQ3).", "The annotator agreement statistics are provided in the appendix.", "5 Analysis Robustness to input information We analyze how robust a model is to the amount of information present.", "To measure the amount of information, we consider the context length (description length for Amazon, dialog context length for Ubuntu) and the size of the global schema, since these two directly control how much knowledge regarding potential missing 
information is available to the model.", "questions.", "With the addition of the missing schema (i.e., BART+missinfo), the questions become more relevant, and our proposed model obtains the highest relevance score.", "The reference obtains a slightly lower relevance score, which can possibly be explained by the fact that humans sometimes digress from the topic.", "Under fluency, interestingly, the transformer and BART baselines obtain high scores.", "With the addition of the missing schema, fluency decreases, and the scores reduce further with the PPLM model.", "We suspect that the usefulness classifier trained with a negative sampling strategy (as opposed to human-labelled data, as in Amazon) contributes to the fluency issues.", "Under missing information, all models perform well, which can be explained by the fact that in Ubuntu the scope of missing information is much larger (since dialog is much more open-ended) than in Amazon.", "Figure 3b shows the results of the pairwise comparison on the usefulness criterion.", "We find that humans choose our model-generated questions 85% of the time when compared to either transformer- or BART-generated questions.", "When compared to BART+missinfo, our model is selected 71% of the time, further affirming the importance of using the PPLM-based decoder.", "We measure the difference in BLEU score between two groups of data samples in which the context length/size of the global schema is either high or low.", "Figure 5 shows that our model is the least sensitive to the information available and hence the most robust on the Amazon dataset.", "7 Owing to our modular approach for estimating missing information, we seek to analyze, in an automatic fashion, whether a question is really asking about missing information.", "This also allows us to explain the reasoning behind a particular generation, as we are able to trace back to the particular missing information that is used to generate the question.", "We run a YAKE extractor on the generated questions to obtain key-phrases.",
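The robustness analysis described above (splitting test examples at the median context length, or global-schema size, and comparing average BLEU between the two groups) can be sketched as follows; the per-example `bleu` values are assumed to be precomputed, and all field names here are illustrative:

```python
from statistics import median, mean

def robustness_gap(examples, key):
    """Split examples at the median of `key` (e.g. context length or
    global-schema size) and return the absolute difference in mean BLEU
    between the two groups; a lower gap indicates a more robust model."""
    cut = median(ex[key] for ex in examples)
    high = [ex["bleu"] for ex in examples if ex[key] > cut]
    low = [ex["bleu"] for ex in examples if ex[key] <= cut]
    return abs(mean(high) - mean(low))

# Illustrative per-example scores for one model.
data = [
    {"context_len": 50,  "bleu": 0.10},
    {"context_len": 120, "bleu": 0.12},
    {"context_len": 300, "bleu": 0.11},
    {"context_len": 450, "bleu": 0.13},
]
gap = robustness_gap(data, "context_len")
```

Computing this gap per model and comparing across models reproduces the shape of the analysis in Figure 4/5: a model whose score barely changes between short- and long-context groups is the least dependent on how much information is available.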
"We calculate the ratio between the number of key-phrases in the output that belong to the original missing schema and the total number of key-phrases present in the output.", "Table 8 shows that when we use our framework of estimating missing information coupled with BART, both models achieve very high missing-information overlap, suggesting that we can reliably obtain the reasoning behind a generated question by tracing the missing-information overlap, as shown in Table 9.", "7 Ubuntu follows similar trends; figure in appendix.", "Amazon example. Category: Binoculars & Scopes; Title: Nikon 7239 Action 7x50 EX Extreme All-Terrain Binocular; Description: The Monarch ATB 42mm with dielectric high-reflective Multilayer Prism coating binocular features brighter, sharper colors, crisp and drastically improved low-light performance.", "6 Related Work Most previous work on question generation focused on generating reading-comprehension-style questions, i.e., questions that ask about information present in a given text (Duan et al., 2017; Zhang and Bansal, 2019).", "Question length We also observe in Table 9 that baseline models tend to generate short and generic questions compared to our model, which often chooses longer schema key-phrases (e.g. 
bigrams) to generate a more specific question.", "We further examined questions annotated for usefulness from the Amazon dataset and observed that 70% of the questions annotated as useful are longer than the not-useful questions.", "The average length of gold useful questions is 10.76 words versus 8.21 for not-useful questions.", "The average lengths of generated questions for BART, BART+MissInfo and BART+MissInfo+PPLM (ours) are 5.6, 6.2 and 12.3 words, respectively.", "We find a similar trend in the Ubuntu dataset.", "Dynamic expansion of global schema We anticipate that even if we build the global schema from the available offline dataset, new entries may appear in a real application.", "We investigate how our framework responds to the dynamic expansion of the global schema.", "We simulate a scenario where we extend the Laptop Accessories category in the Amazon dataset with 100 new products (those that appeared on Amazon.com after the latest entry in the dataset). [Figure 4: Average BLEU score difference between classes having longer (> 200 (median) words) and shorter descriptions, and larger (> 200 (median) key-phrases) and smaller global schemata, for the Amazon dataset. Lower differences indicate more invariance toward the available information.]", "We obtain key-phrases from their product descriptions and include them in the global schema for the category, which amounts to a 21% change in the existing global schema.", "For 50 random products in the test set from the same category, we found that in 28 out of 50 cases (56%) the model picked a newly added schema element.", "This indicates that our framework is capable of supporting dynamic changes in the global schema and reflecting them in subsequent generations without retraining from scratch.", "Later, Rao and Daumé III (2018, 2019) introduced the task of clarification question generation in order to ask questions about missing information in a given context.", "ClarQ (Kumar and Black, 2020) contains clarification questions in a question-answering setup.", "However, unlike our work, these works still struggle to estimate the most useful missing information.", "Recent works on conversational question answering also focused on the aspect of question generation or retrieval (Choi et al., 2018; Aliannejadi et al., 2019).", "Qi et al. (2020a) especially focused on generating information-seeking questions, while Majumder et al. 
(2020) proposed a question generation task in free-form interview-style conversations.", "In this work, in addition to improving clarification question generation on a community-QA dataset, we are also the first to explore a goal-oriented dialog scenario.", "Representing context and associated global information in a structured format has been shown to improve performance in generation tasks (Das et al., 2019; Subramanian et al., 2018; Khashabi et al., 2017) in general, and in summarization (Fan et al., 2019) and story generation (Yao et al., 2019) in particular.", "We also draw inspiration from recent work on information extraction from free-form text (Vedula et al., 2019; Stanovsky et al., 2016) and develop a novel framework to estimate missing information from available natural text contexts.", "Finally, for question generation, we use BART (Lewis et al., 2019), which is state-of-the-art for many generation tasks such as summarization and dialog generation.", "Furthermore, inspired by recent work that uses controlled language generation during decoding (Ghazvininejad et al., 2017; Holtzman et al., 2018), we use the Plug-and-Play Language Model (Dathathri et al., 2019) to tune generations during decoding.", "While similar approaches for controllable generation (Keskar et al., 2019; See et al., 2019) have been proposed, we extend such efforts to enhance the usefulness of the generated clarification questions.", "We propose a model for generating useful clarification questions based on the idea that missing information in a context can be identified by taking a difference between the global and the local view.", "We show how we can fine-tune a large-scale pretrained model such as BART on such differences to generate questions about missing information.", "Further, we show how we can tune these generations to make them more useful using PPLM with a usefulness classifier as its attribute model.", "Thorough analyses reveal that our framework works across domains, shows 
robustness to information availability, and responds to dynamic changes in global knowledge.", "Although we experiment only with the Amazon and Ubuntu datasets, our idea generalizes to scenarios where identifying missing information is valuable, such as conversational recommendation or eliciting user preferences in chit-chat, among others.", "Acknowledgements We thank everyone in the Natural Language Processing Group at Microsoft Research, Redmond, with special mention to Yizhe Zhang, Bill Dolan, Chris Brockett, and Matthew Richardson for their critical review of this work.", "We also thank the anonymous reviewers for providing valuable feedback.", "In addition, we acknowledge the human annotators from Amazon Mechanical Turk for data annotation and human evaluation of our systems.", "BPM is partly supported by a Qualcomm Innovation Fellowship and NSF Award #1750063.", "Findings and observations are of the authors only and do not necessarily reflect the views of the funding agencies.", "We do not foresee any immediate ethical concerns, since we assume that our work will be restricted in domain as compared to free-form language generation.", "We still cautiously advise any developer who wishes to extend our system to their own use-case (beyond e-commerce and goal-oriented conversations) to be careful about curating a global pool of knowledge for data involving sensitive user information.", "Finally, since we are fine-tuning a pretrained generative model, we inherit the general risk of generating biased or toxic language, which should be carefully filtered.", "In general, we expect users to benefit from our system by reducing ambiguity (when information is presented in a terse fashion, e.g. in a conversation) and improving contextual understanding to enable them to take more informed actions (e.g. making a purchase)." ]
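The core schema operations described in the sentences above (the per-sentence union into a local schema (Eq. 2), the class-level global schema, and the set difference of Eq. 4) can be sketched as below; the paper uses an embedding-based semantic match to remove near-duplicates, for which an exact-match test is a simplified stand-in, and the vocabulary-based extractor is purely illustrative (the paper uses YAKE plus a dependency parse):

```python
def local_schema(context_sentences, extract):
    """Eq. 2: the local schema of a context is the union of the
    schemata of its sentences; `extract` maps a sentence to a set of
    schema elements (key-phrases or (key-phrase, verb, relation))."""
    schema = set()
    for sent in context_sentences:
        schema |= extract(sent)
    return schema

def missing_schema(global_schema, local):
    """Eq. 4: elements of the class-level global schema that have no
    match in the context's local schema. Exact match stands in for
    the paper's embedding-based semantic match."""
    return {el for el in global_schema if el not in local}

# Illustrative key-phrase-only schemata for one product context.
VOCAB = {"battery", "screen", "warranty", "weight"}
extract = lambda s: {w for w in s.split() if w in VOCAB}
ctx = ["the screen is full hd", "battery lasts ten hours"]
local = local_schema(ctx, extract)
glob = {"battery", "screen", "warranty", "weight"}  # class-level union
missing = missing_schema(glob, local)
```

The missing set ("warranty", "weight" in this toy example) is exactly what the second stage conditions on, which is why a clarification question generated from it cannot, by construction, ask about information already present in the description.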
[ "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "objective", "method", "abstain", "result", "result", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "other", "result", 
"abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "result", "result", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain" ]