| title (string, 15-188 chars) | abstract (string, 400-1.8k chars) | introduction (string, 9-10.5k chars) | content (string, 778-41.9k chars) | abstract_len (int64, 400-1.8k) | intro_len (int64, 9-10.5k) | abs_len (int64, 400-1.8k) |
|---|---|---|---|---|---|---|
Pre-training Transformer Models with Sentence-Level Objectives for Answer Sentence Selection | An important task for designing QA systems is answer sentence selection (AS2): selecting the sentence containing (or constituting) the answer to a question from a set of retrieved relevant documents. In this paper, we propose three novel sentence-level transformer pre-training objectives that incorporate paragraph-level semantics within and across documents, to improve the performance of transformers for AS2, and mitigate the requirement of large labeled datasets. Specifically, the model is tasked to predict whether: (i) two sentences are extracted from the same paragraph, (ii) a given sentence is extracted from a given paragraph, and (iii) two paragraphs are extracted from the same document. Our experiments on three public and one industrial AS2 datasets demonstrate the empirical superiority of our pre-trained transformers over baseline models such as RoBERTa and ELECTRA for AS2. | Question Answering (QA) finds itself at the core of several commercial applications, for e.g., virtual assistants such as Google Home, Alexa and Siri. Answer Sentence Selection (AS2) is an important task for QA Systems operating on unstructured text such as web documents. When presented with a set of relevant documents for a question (retrieved from a web index), AS2 aims to find the best answer sentence for the question. The recent popularity of pre-trained transformers AS2 is a knowledge-intensive complex reasoning task, where the answer candidates for a question can stem from multiple documents, possibly on different topics linked to concepts in the question. While there have been recent works Furthermore, obtaining high quality human labeled examples for AS2 is expensive and time consuming, due to the large number of answer candidates to be annotated for each question. Domainspecific AS2 datasets such as WikiQA Towards improving the downstream performance of pre-trained transformers for AS2 and mitigating the requirement of large scale labeled data for fine-tuning, we propose three novel sentencelevel transformer pre-training objectives, which can incorporate paragraph-level semantics across multiple documents. Analogous to the sentence-pair nature of AS2, we design our pre-training objectives to operate over a pair of input text sequences. The model is tasked with predicting: (i) whether the sequences are two sentences extracted from the same paragraph, (ii) whether the first sequence is a sentence that is extracted from the second sequence (paragraph), and (iii) whether the sequences are two paragraphs belonging to the same document. We evaluate our paragraph-aware pre-trained transformers for AS2 on three popular public datasets: ASNQ, WikiQA and TREC-QA; and one industrial QA benchmark 1 . Results show that our pre-training can improve the performance of finetuning baseline transformers such as RoBERTa and ELECTRA on AS2 by ∼3-4% points without requiring any additional data (labeled/unlabeled). | Answer Sentence Selection (AS2) Earlier approaches for AS2 used CNNs Paragraph/Document-level Semantics Transformers for Long Inputs Longformer In this section we formally define the task of AS2. Given a question q and a set of answer candidates A={a 1 , . . ., a n }, the objective is to select the candidate ā ∈ A that best answers q. AS2 can be modeled as a ranking task over A to learn a scoring function f : Q×A → R that predicts the probability f (q, a) of an answer candidate a being correct. 
The best answer ā corresponds to argmax n i=1 f (q, a i ). Pre-trained transformers are used as QA pair encoders for AS2 to approximate the function f . Spans in Same Paragraph (SSP) Given two sequences (A, B) as input to the transformer, the objective is to predict if A and B belong to the same paragraph in a document. To create positive pairs (A, B), given a document D, we extract two small, contiguous and disjoint subsets of sentences to be used as A and B from a single paragraph P i ∈ D. To create negative pairs, we sample spans of sentences B ′ from different paragraphs P j , j ̸ = i in the same document D (hard negatives) and also from different documents (easy negatives). The negative pairs correspond to (A, B ′ ). Posing the above pre-training objective in terms of spans (instead of sentences) allows for modifying the lengths of the inputs A, B (by changing number of sentences ∈A, B). When fine-tuning transformers for AS2, typically the question is provided as the first input and a longer answer candidate/paragraph is provided as the second input. For our experiments (Sec 5), we use a longer span for input B than A. Span in Paragraph (SP) Given two sequences (A, B) as input to the transformer, the objective is to predict if A is a span of text extracted from a paragraph B in a document. To create positive pairs (A, B), given a paragraph P i in a document D, we extract a small contiguous span of sentences A from it and create the input pair as (A, P i \ A). To create negative pairs, we select other paragraphs P j , j ̸ = i in the same document D and remove a randomly chosen span A ′ from each of them. The negative pairs correspond to (A, P j \ A ′ ). This is necessary to ensure that the model does not simply recognize whether the second input is a complete paragraph or a clipped version. To create easy negatives, we use the above approach for paragraphs P j sampled from documents other than D. Paragraphs in Same Document (PSD) Given two sequences (A, B) as input to the transformer, the objective is to predict if A and B are paragraphs belonging to the same document. To create positive pairs (A, B), given a document D k , we randomly select paragraphs P i , P j ∈ D k and obtain a pair (P i , P j ). To create negative pairs, we randomly select P ′ j / ∈ D k , and obtain a pair (P i , P ′ j ). Pre-training To eliminate any improvements stemming from the usage of more data, we perform pre-training on the same corpora as RoBERTa: English Wikipedia, the BookCorpus, OpenWeb-Text and CC-News. We perform continuous pretraining starting from RoBERTa AS2 Fine-tuning We consider three public and one industrial AS2 benchmark as fine-tuning datasets for AS2 (statistics presented in Appendix A). We use standard evaluation metrics for AS2: Mean Average Precision (MAP), Mean Reciprocal Recall (MRR) and Precision@1 (P@1). • ASNQ is a large-scale AS2 dataset (Garg et al., 2020) with questions from Google search engine queries, and answer candidates extracted from a Wikipedia page. ASNQ is a modified version of the Natural Questions (NQ) • WikiQA is a popular AS2 dataset We remove both the (all-) and (all+) questions for our experiments (standard "clean" setting). • TREC-QA is a popular AS2 dataset We present results of our pre-trained models on the AS2 datasets in Table For questions in ASNQ and WikiQA, all candidate answers are extracted from a single Wikipedia document, while for TREC-QA and WQA, candidate answers come from multiple documents extracted from heterogeneous web sources. 
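The SSP, SP and PSD objectives described above reduce to simple pair-sampling procedures over a paragraph-segmented corpus. The sketch below illustrates one way to generate such pre-training pairs; it assumes documents are lists of paragraphs (each a list of sentence strings), and the function names, span lengths and one-hard-one-easy negative ratio are illustrative choices rather than the paper's released implementation.

```python
import random

# Assumed corpus layout: corpus = list of documents, document = list of
# paragraphs, paragraph = list of sentence strings.

def ssp_pairs(doc, corpus, span_a=1, span_b=3):
    """Spans in Same Paragraph: positive = two contiguous, disjoint spans from
    one paragraph; hard negative = a span from another paragraph of the same
    document; easy negative = a span from a different document."""
    pairs = []
    for i, para in enumerate(doc):
        if len(para) < span_a + span_b:
            continue
        a = " ".join(para[:span_a])
        b = " ".join(para[span_a:span_a + span_b])
        pairs.append((a, b, 1))                                   # positive
        others = [p for j, p in enumerate(doc) if j != i and p]
        if others:                                                # hard negative
            pairs.append((a, " ".join(random.choice(others)[:span_b]), 0))
        other_docs = [d for d in corpus if d is not doc and d]
        if other_docs:                                            # easy negative
            easy = random.choice([p for p in random.choice(other_docs) if p])
            pairs.append((a, " ".join(easy[:span_b]), 0))
    return pairs


def sp_pairs(doc):
    """Span in Paragraph: positive = (span A, paragraph P_i with A removed);
    negative = (A, another paragraph P_j with a random span removed), so both
    second inputs look like clipped paragraphs. A one-sentence span is used
    here for simplicity."""
    pairs = []
    for i, para in enumerate(doc):
        if len(para) < 2:
            continue
        a = para[0]
        pairs.append((a, " ".join(para[1:]), 1))                  # positive
        for j, other in enumerate(doc):
            if j == i or len(other) < 2:
                continue
            k = random.randrange(len(other))
            pairs.append((a, " ".join(other[:k] + other[k + 1:]), 0))
    return pairs


def psd_pairs(doc, corpus):
    """Paragraphs in Same Document: positive = two paragraphs of one document;
    negative = paragraphs drawn from two different documents."""
    pairs = []
    if len(doc) >= 2:
        p_i, p_j = random.sample(range(len(doc)), 2)
        pairs.append((" ".join(doc[p_i]), " ".join(doc[p_j]), 1))
        other_docs = [d for d in corpus if d is not doc and d]
        if other_docs:
            p_neg = random.choice(random.choice(other_docs))
            pairs.append((" ".join(doc[p_i]), " ".join(p_neg), 0))
    return pairs
```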
By design of our objectives SSP, SP and PSD, they perform differently when fine-tuning on different datasets. For example, SSP aligns well with ASNQ and Wik-iQA as they contain many negative candidates, per question, extracted from the same document as the positive (i.e, 'hard' negatives). As per our design of the SSP objective, for every positive sequence pair, we sample 2 'hard' negatives coming from the same document as the positive pair. The presence of hard negatives is of particular importance for WikiQA and ASNQ, as it forces the models to learn and contrast more subtle differences between answer candidates, which might likely be more related as they come from the same document. On the other hand, PSD is designed so as to see paragraphs from same or different documents (with no analogous concept of 'hard' negatives of SSP and SP). For this reason, PSD is better aligned for fine-tuning on datasets where candidates are extracted from multiple documents, such as WQA and TREC-QA. Comparison with TANDA For RoBERTa, our pre-trained models can surprisingly improve/achieve comparable performance to TANDA. Note that our models achieve this performance without using the latter's additional ∼20M labeled ASNQ QA pairs. This lends support to our pretraining objectives mitigating the requirement of large scale labeled data for AS2 fine-tuning. For ELECTRA, we only observe comparable performance to TANDA for WQA and TREC-QA. Ablation: MLM-only Pre-training To mitigate any improvements stemming from the specific data sampling techniques used by our objectives, we pretrain 3 models (starting from RoBERTa-Base) with the same data sampling as each of the SSP, SP and PSD models, but only using the MLM objective. We report results in Table Ablation: Pre-training Task 'Difficulty' We evaluate the pre-trained models (after convergence) on their specific tasks over the validation split of Wikipedia (to enable evaluating baselines such as BERT and ALBERT). Table The results show that our objectives are generally harder than NSP (Next Sentence Prediction by On the other hand, our pre-training objectives are "more challenging" than these previously proposed objectives due to the requirement of reasoning over multiple paragraphs and multiple documents, addressing same or different topics at the same time. In fact, Table In this paper we have presented three sentencelevel pre-training objectives for transformers to incorporate paragraph and document-level semantics. Our objectives predict whether (i) two sequences are sentences extracted from the same paragraph, (ii) first sequence is a sentence extracted from the second, and (iii) two sequences are paragraphs belonging to the same document. We evaluate our pre-trained models for the task of AS2 on four datasets. Our results show that our pre-trained models outperform the baseline transformers such as RoBERTa and ELECTRA. We only consider English language datasets for our experiments in this paper. However we hypothesize that our pre-training objectives should provide similar performance improvements when extended to other languages with limited morphology, like English. The pre-training objectives proposed in our work are designed considering Answer Sentence Selection (AS2) as the target task, and can be extended for other tasks like Natural Language Inference, Question-Question Similarity, etc. in future work. 
The pre-training experiments in our paper require large amounts of GPU and compute resources (multiple NVIDIA A100 GPUs running for several days) to finish the model pre-training. This makes re-training models using our pre-training approaches computationally expensive using newer data. To mitigate this, we are releasing our code and pre-trained model checkpoints at Here we present statistics and links for downloading the AS2 datasets used: ASNQ We experiment with the base architecture, which uses an hidden size of 768, 12 transformer layers, 12 attention heads and feed-forward size of 3072. We perform continued pre-training starting from the publicly released checkpoints of RoBERTa-Base The evaluation of the models is performed on four different datasets for Answer Sentence Selection. We maintain the same hyperparameters used in pre-training apart from the learning rate, the number of warmup steps and the batch size. We do early stopping on the development set if the number of non-improving validations (patience) is higher than 5. For ASNQ, we found that using a very large batch size is beneficial, providing a higher accuracy. We use a batch size of 2048 examples on ASNQ for RoBERTa models and 1024 for ELECTRA models. The peak learning rate is set to 1 * 10 -5 for all models, and the number of warmup steps to 1000. For WikiQA, TREC-QA and WQA, we select the best batch size out of {16, 32, 64} and learning rate out of {2 * 10 -6 , 5 * 10 -5 , 1 * 10 -5 , 2 * 10 -5 } using crossvalidation. We train the model for 6 epochs on ASNQ, and up to 40 epochs on WikiQA, TREC-QA, and WQA. The performance of practical AS2 systems is typically measured using Precision-at-1 P@1 (Garg and Moschitti, 2021). In addition to P@1, we also use Mean Average Precision (MAP) and Mean Reciprocal Recall (MRR) to evaluate the ranking of the set of candidates produced by the model. We used metrics from Torchmetrics Table We present some qualitative examples from the three public AS2 datasets. We highlight cases in which the baseline RoBERTa-Base model is unable to rank the correct answer in the top position, but where our model pretrained with SP is successful. The examples are provided in Table | 893 | 2,037 | 893 |
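For quick reference, the fine-tuning settings listed in the appendix above can be collected into one configuration object. The dictionary structure and key names are my own; the values are those stated in the text (the learning-rate grid is reproduced exactly as written there).

```python
# Illustrative summary of the AS2 fine-tuning hyperparameters stated above.
AS2_FINETUNE_CONFIG = {
    "asnq": {
        "batch_size": {"roberta-base": 2048, "electra-base": 1024},
        "peak_lr": 1e-5,
        "warmup_steps": 1000,
        "max_epochs": 6,
    },
    "wikiqa_trecqa_wqa": {
        # best values chosen by cross-validation from these grids
        "batch_size_grid": [16, 32, 64],
        "lr_grid": [2e-6, 5e-5, 1e-5, 2e-5],   # as written in the text
        "max_epochs": 40,
        "early_stopping_patience": 5,           # non-improving validations
    },
    "metrics": ["P@1", "MAP", "MRR"],
}
```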
Style Transfer Through Back-Translation | Style transfer is the task of rephrasing the text to contain specific stylistic properties without changing the intent or affect within the context. This paper introduces a new method for automatic style transfer. We first learn a latent representation of the input sentence which is grounded in a language translation model in order to better preserve the meaning of the sentence while reducing stylistic properties. Then adversarial generation techniques are used to make the output match the desired style. We evaluate this technique on three different style transformations: sentiment, gender and political slant. Compared to two state-of-the-art style transfer modeling techniques we show improvements both in automatic evaluation of style transfer and in manual evaluation of meaning preservation and fluency. | Intelligent, situation-aware applications must produce naturalistic outputs, lexicalizing the same meaning differently, depending upon the environment. This is particularly relevant for language generation tasks such as machine translation This paper introduces a novel approach to transferring style of a sentence while better preserving its meaning. We hypothesize-relying on the study of We focus on transferring author attributes: (1) gender and (2) political slant, and (3) on sentiment modification. The second task is novel: given a sentence by an author with a particular political leaning, rephrase the sentence to preserve its meaning but to confound classifiers of political slant ( §3). The task of sentiment modification enables us to compare our approach with state-of-Figure the-art models Style transfer is evaluated using style classifiers trained on held-out data. Our back-translation style transfer model outperforms the state-of-theart baselines The main contribution of this work is a new approach to style transfer that outperforms stateof-the-art baselines in both the quality of inputoutput correspondence (meaning preservation and fluency), and the accuracy of style transfer. The secondary contribution is a new task that we propose to evaluate style transfer: transferring political slant. | Given two datasets 2 } which represent two different styles s 1 and s 2 , respectively, our task is to generate sentences of the desired style while preserving the meaning of the input sentence. Specifically, we generate samples of dataset X 1 such that they belong to style s 2 and samples of X 2 such that they belong to style s 1 . We denote the output of dataset X 1 transfered to style s 2 as X1 = {x (1) 2 , . . . , x(n) 2 } and the output of dataset X 2 transferred to style s 1 as X2 = {x Figure In this section we describe how we learn the latent content variable z using back-translation. The e → f machine translation and f → e backtranslation models are trained using a sequence-tosequence framework Formally, let θ E represent the parameters of the encoder of f → e translation system. Then z is given by: where, x f is the sentence x in language f . Specifically, x f is the output of e → f translation system when x e is given as input. Since z is derived from a non-style specific process, this Encoder is not style specific. Figure We train a convolutional neural network (CNN) classifier to accurately predict the given style. We also use it to evaluate the error in the generated samples for the desired style. We train the classifier in a supervised manner. The classifier accepts either discrete or continuous tokens as inputs. 
This is done such that the generator output can be used as input to the classifier. We need labeled examples to train the classifier such that each instance in the dataset X should have a label in the set s = {s 1 , s 2 }. Let θ C denote the parameters of the classifier. The objective to train the classifier is given by: To improve the accuracy of the classifier, we augment classifier's inputs with style-specific lexicons. We concatenate binary style indicators to each input word embedding in the classifier. The indicators are set to 1 if the input word is present in a style-specific lexicon; otherwise they are set to 0. Style lexicons are extracted using the log-odds ratio informative Dirichlet prior We use a bidirectional LSTM to build our decoders which generate the sequence of tokens The sequence x is conditioned on the latent code z (in our case, on the machine translation model). In this work we use a corpus translated to French by the machine translation system as the input to the encoder of the backtranslation model. The same encoder is used to encode sentences of both styles. The representation created by this encoder is given by Eq 1. Samples are generated as follows: where, x<t are the tokens generated before xt . Tokens are discrete and non-differentiable. This makes it difficult to use a classifier, as the generation process samples discrete tokens from the multinomial distribution parametrized using softmax function at each time step t. This nondifferentiability, in turn, breaks down gradient propagation from the discriminators to the generator. Instead, following where, o t is the output of the generator and τ is the temperature which decreases as the training proceeds. Let θ G denote the parameters of the generators. Then the reconstruction loss is calculated using the cross entropy function, given by: Here, the back-translation encoder E creates the latent code z by: The generative loss L gen is then given by: where L recon is given by Eq. ( We also use global attention of where h t is the current target state and hs are all source states. While generating sentences, we use the attention vector to replace unknown characters (UNK) using the copy mechanism in Much work in computational social science has shown that people's personal and demographic characteristics-either publicly observable (e.g., age, gender) or private (e.g., religion, political affiliation)-are revealed in their linguistic choices Moreover, prior work has shown that the quality of language identification and POS tagging degrades significantly on African American Vernacular English We thus focus on two tasks that have practical and social-good applications, and also accurate style classifiers. To position our method with respect to prior work, we employ a third task of sentiment transfer, which was used in two stateof-the-art approaches to style transfer Gender. In sociolinguistics, gender is known to be one of the most important social categories driving language choice We used Reddy and Knight's (2016) dataset of reviews from Yelp annotated for two genders corresponding to markers of sex. Sentiment. To compare our work with the stateof-the-art approaches of style transfer for nonparallel corpus we perform sentiment transfer, replicating the models and experimental setups of Dataset statistics. We summarize below corpora statistics for the three tasks: transferring gender, political slant, and sentiment. 
The dataset for sentiment modification task was used as described in In what follows, we describe our experimental settings, including baselines used, hyperparameter settings, datasets, and evaluation setups. Baseline. We compare our model against the "cross-aligned" auto-encoder Translation data. We trained an English-French neural machine translation system and a French-English back-translation system. We used data from Workshop in Statistical Machine Translation 2015 (WMT15) Hyperparameter settings. In all the experiments, the generator and the encoders are a twolayer bidirectional LSTM with an input size of 300 and the hidden dimension of 500. The generator samples a sentence of maximum length 50. All the generators use global attention vectors of size 500. The CNN classifier is trained with 100 filters of size 5, with max-pooling. The input to CNN is of size 302: the 300-dimensional word embedding plus two bits for membership of the word in our style lexicons, as described in §2.2.1. Balancing parameter λ c is set to 15. For sentiment task, we have used settings provided in We evaluate our approach along three dimensions. (1) Style transfer accuracy, measuring the proportion of our models' outputs that generate sentences of the desired style. The style transfer accuracy is performed using classifiers trained on held-out train data that were not used in training the style transfer models. (2) Preservation of meaning. (3) Fluency, measuring the readability and the naturalness of the generated sentences. We conducted human evaluations for the latter two. In what follows, we first present the quality of our neural machine translation systems, then we present the evaluation setups, and then present the results of our experiments. Translation quality. The BLEU scores achieved for English-French MT system is 32.52 and for French-English MT system is 31.11; these are strong translation systems. We deliberately chose a European language close to English for which massive amounts of parallel data are available and translation quality is high, to concentrate on the style generation, rather than improving a translation system. We measure the accuracy of style transfer for the generated sentences using a pre-trained style classifier ( §2.2.1). The classifier is trained on data that is not used for training our style transfer generative models (as described in §3). The classifier has an accuracy of 82% for the gender-annotated corpus, 92% accuracy for the political slant dataset and 93.23% accuracy for the sentiment dataset. We transfer the style of test sentences and then test the classification accuracy of the generated sentences for the opposite label. For example, if we want to transfer the style of male Yelp reviews to female, then we use the fixed common encoder of the back-translation model to encode the test male sentences and then we use the female generative model to generate the female-styled reviews. We then test these generated sentences for the female label using the gender classifier. In Table On two out of three tasks our model substantially outperforms the baseline, by up to 12% in political slant transfer, by up to 7% in sentiment modification. Although we attempted to use automatics measures to evaluate how well meaning is preserved in our transformations; measures such as BLEU Meaning preservation in style transfer is not trivial to define as literal meaning is likely to change when style transfer occurs. 
For example "My girlfriend loved the desserts" vs "My partner liked the desserts". Thus we must relax the condition of literal meaning to intent or affect of the utterance within the context of the discourse. Thus if the intent is to criticize a restaurant's service in a review, changing "salad" to "chicken" could still have the same effect but if the intent is to order food that substitution would not be acceptable. downstream task and ensure that the task has the same outcome even after style transfer. This is a hard evaluation and hence we resort to a simpler evaluation of the "meaning" of the sentence. We set up a manual pairwise comparison following We then count the preferences of the eleven participants, measuring the relative acceptance of the generated sentences. 7 A third option "=" was given to participants to mark no preference for either of the generated sentence. The "no preference" option includes choices both are equally bad and both are equally good. We conducted three tests one for each type of experiment -gender, political slant and sentiment. We also divided our annotation set into short (#tokens ≤ 15) and long (15 < #tokens ≤ 30) sentences for the gender and the political slant experiment. In each set we had 20 random samples for each type of style transfer. In total we had 100 sentences to be annotated. Note that we did not ask about appropriateness of the style transfer in this test, or fluency of outputs, only about meaning preservation. The results of human evaluation are presented in Table Although a no-preference option was chosen often-showing that state-ofthe-art systems are still not on par with hu-7 None of the human judges are authors of this paper man expectations-the BST models outperform the baselines in the gender and the political slant transfer tasks. Crucially, the BST models significantly outperform the CAE models when transferring style in longer and harder sentences. Annotators preferred the CAE model only for 12.5% of the long sentences, compared to 47.27% preference for the BST model. Finally, we evaluate the fluency of the generated sentences. Fluency was rated from 1 (unreadable) to 4 (perfect) as is described in The results shown in BST outperforms the baseline overall. It is interesting to note that BST generates significantly more fluent longer sentences than the baseline model. Since the average length of sentences was higher for the gender experiment, BST notably outperformed the baseline in this task, relatively to the sentiment task where the sentences are shorter. Examples of the original and style-transfered sentences generated by the baseline and our model are shown in the Supplementary Material. The loss function of the generators given in Eq. 5 includes two competing terms, one to improve meaning preservation and the other to improve the style transfer accuracy. In the task of sentiment modification, the BST model preserved meaning worse than the baseline, on the expense of being better at style transfer. We note, however, that the sentiment modification task is not particularly well-suited for evaluating style transfer: it is particularly hard (if not impossible) to disentangle the sentiment of a sentence from its proposi-tional content, and to modify sentiment while preserving meaning or intent. On the other hand, the style-transfer accuracy for gender is lower for BST model but the preservation of meaning is much better for the BST model, compared to CAE model and to "No preference" option. 
This means that the BST model does better job at closely representing the input sentence while taking a mild hit in the style transfer accuracy. Style transfer with non-parallel text corpus has become an active research area due to the recent advances in text generation tasks. Our work is also closely-related to a problem of paraphrase generation We propose a novel approach to the task of style transfer with non-parallel text. In the future work, we will also explore whether an enhanced back-translation by pivoting through several languages will learn better grounded latent meaning representations. In particular, it would be interesting to back-translate through multiple target languages with a single source language Measuring the separation of style from content is hard, even for humans. It depends on the task and the context of the utterance within its discourse. Ultimately we must evaluate our style transfer within some down-stream task where our style transfer has its intended use but we achieve the same task completion criteria. | 815 | 1,317 | 815 |
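Looking back at the model section of this paper, the two training signals for the generator (reconstruction, plus style classification on soft generator outputs) can be sketched in a few lines. The temperature-scaled softmax stands in for discrete sampling so that gradients from the style classifier can reach the generator. A classifier that accepts sequences of soft one-hot vectors, and the exact way the balancing weight lambda_c (set to 15 in the paper) enters the sum, are assumptions based on this excerpt rather than the authors' released code.

```python
import torch.nn.functional as F

def soft_sample(step_logits, tau):
    """Temperature-scaled softmax over the vocabulary: a 'soft' token fed to
    the style classifier in place of a sampled discrete token, so classifier
    gradients can reach the generator. tau is annealed toward 0 during
    training."""
    return F.softmax(step_logits / tau, dim=-1)              # (batch, vocab)

def generator_loss(recon_logits, target_ids, soft_tokens, style_classifier,
                   desired_style, lambda_c=15.0):
    """Reconstruction cross-entropy plus a style-classification term on the
    soft generator outputs, balanced by lambda_c. How the two terms combine is
    my reading of the excerpt, since the loss equation itself is garbled
    there; shapes and names are illustrative."""
    recon = F.cross_entropy(recon_logits.view(-1, recon_logits.size(-1)),
                            target_ids.view(-1))
    style = F.cross_entropy(style_classifier(soft_tokens), desired_style)
    return recon + lambda_c * style
```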
Don't Say That! Making Inconsistent Dialogue Unlikely with Unlikelihood Training | Generative dialogue models currently suffer from a number of problems which standard maximum likelihood training does not address. They tend to produce generations that (i) rely too much on copying from the context, (ii) contain repetitions within utterances, (iii) overuse frequent words, and (iv) at a deeper level, contain logical flaws. In this work we show how all of these problems can be addressed by extending the recently introduced unlikelihood loss (Welleck et al., 2019a) to these cases. We show that appropriate loss functions which regularize generated outputs to match human distributions are effective for the first three issues. For the last important general issue, we show applying unlikelihood to collected data of what a model should not do is effective for improving logical consistency, potentially paving the way to generative models with greater reasoning ability. We demonstrate the efficacy of our approach across several dialogue tasks. | Open-ended tasks such as dialogue reveal a number of issues with current neural text generation methods. In more strongly grounded tasks such as machine translation and image captioning, current encoder-decoder architectures provide strong performance, where mostly word-level decisions are often taken correctly by the model. However, critical failings are exposed in less constrained generation: reliance on repetitive copying and overuse of frequent words, and an inability to maintain logical coherence. The former shows the learning objective is faulty in that it cannot match simple statistics of the training data, while the latter touches more to the heart of artificial intelligence: Work done while at Facebook AI Research (FAIR). these models do not understand what they are saying. For example, Figure In this work, we show how the recently introduced unlikelihood objective We first generalize unlikelihood to a different domain: dialogue, where we measure statistics of the training distribution in terms of contextual copies, within-utterance repeats, and vocabulary usage. We then develop loss functions that control these statistics, providing improved metrics on several tasks. Secondly, we show how the same tools can be used to address deeper semantic issues in such models. By leveraging existing natural language inference (NLI) data Code and pre-trained models will be made available. † | Dialogue Generation Dialogue generation consists in predicting an utterance y = (y 1 , . . . , y |y| ) given a context x = {s 1 , . . . , s k , u 1 , . . . , u t } that consists of initial context sentences s 1:k (e.g., scenario, knowledge, personas, etc.) followed by dialogue history utterances u 1:t from speakers who take consecutive turns. Likelihood Training Given a dataset D = {(x (i) , y (i) )} derived from a collection of humanhuman interactions, the standard approach to generative training for dialogue tasks is maximum likelihood estimation (MLE), that minimizes: where x (i) is a gold context (dialogue history and initial context sentences) and y (i) is a gold nextutterance, and y (i) t is the t-th token of y (i) . Likelihood-based (greedy or beam) decoding applied after training a model with this objective yields sequences with statistics that do not match the original human training sequence distribution. 
To control for such distribution mismatches, we employ the unlikelihood loss The general form of the unlikelihood loss penalizes a set of tokens C t at each time-step, where C t ⊆ V is a subset of the vocabulary, and β(y c ) is a candidate-dependent scale that controls how much the candidate token should be penalized. The overall objective in unlikelihood training then consists of mixing the likelihood and unlikelihood losses, where α ∈ R is the mixing hyper-parameter. Likelihood tries to model the overall sequence probability distribution, while unlikelihood corrects for known biases. It does this via the set of negative candidates C t calculated at each step t, where we are free to select candidate generation functions depending on the biases to be mitigated. Likelihood pushes up the probability of a gold token y (i) t while unlikelihood pushes down the probability of negative candidate tokens y c ∈ C t . In In this paper, we demonstrate how unlikelihood can be used as a general framework by applying it to the dialogue domain. We show how varying the contexts x, targets y, candidates C and scaling β can be used to improve the coherence and language modeling quality of dialogue models. To do this, we now consider the different biases we wish to mitigate, and construct a specific unlikelihood loss for each in turn. We use the ConvAI2 persona-based dialogue To measure label repetition in a sequence y, we use the portion of duplicate n-grams: and report the metric averaged over the examples. Context repetition increases when the model 'copies' n-grams from the context. To quantify language modeling quality, we use standard perplexity and F1 metrics. We use the pre-trained model fine-tuned with MLE as the baseline, and compare it against the pre-trained model fine-tuned with copy and repetition unlikelihood ( §2.1). We evaluate the ability of vocabulary unlikelihood ( §2.2) to reduce the mismatch between model and human token distributions. We use the ConvAI2 dataset, where our baseline is again trained using maximum likelihood. Starting with the baseline model, we then fine-tune several models using vocab unlikelihood at logarithmically interpolated values of α ∈ [1, 1000]. We partition the vocabulary into 'frequent', 'medium', 'rare', and 'rarest' using the human unigram distribution computed with the ConvAI2 training set, corresponding to the sorted token sets whose cumulative mass accounts for the top 40%, the next 30%, the next 20% and the final 10% of usage, respectively. We evaluate a model by generating utterances given contexts from the Con-vAI2 validation set, and compute the fraction of tokens within each class. Results Figure Table Human Evaluation Finally, we perform a human evaluation using the ACUTE-EVAL framework We use the dialogue natural language inference (NLI) task of Two Utterance Generation Task We adapt the initial dialogue NLI dataset by using entailing and neutral training sentence pairs as plausible positive utterances, and contradicting pairs as negatives. That is, if a pair (s 1 , s 2 ) from Dialogue NLI has label E or N, the example We consider two types of entailment: entailing sentence pairs that appear together in a dialogue in the original Persona-Chat dataset and are therefore natural ('entailment'), and those that only entail via their triple relations ('triple-entailment'). The latter are more challenging, noisier targets. 
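A minimal PyTorch sketch of the mixed objective above: token-level cross-entropy plus the unlikelihood term, which pushes down the probability of per-step negative candidate tokens. The candidate-dependent weight beta(y_c) is fixed to 1 and the normalization of the unlikelihood term is a simplifying choice of mine; shapes, names and the candidate representation are illustrative rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def mle_plus_unlikelihood(logits, targets, neg_candidates, alpha=1.0, eps=1e-6):
    """Mixed MLE + unlikelihood loss for one sequence.

    logits:         (T, V) decoder logits
    targets:        (T,)   gold next-token ids
    neg_candidates: list of length T; each entry is a 1-D LongTensor of
                    negative candidate token ids C_t for that step
    """
    log_probs = F.log_softmax(logits, dim=-1)                 # (T, V)
    mle = F.nll_loss(log_probs, targets)                      # likelihood term

    ul_terms = []
    for t, cands in enumerate(neg_candidates):
        if cands.numel() == 0:
            continue
        p_neg = log_probs[t, cands].exp()                     # p(y_c | y_<t, x)
        ul_terms.append(-torch.log(torch.clamp(1.0 - p_neg, min=eps)).sum())

    if ul_terms:
        unlikelihood = torch.stack(ul_terms).sum() / len(neg_candidates)
    else:
        unlikelihood = logits.new_zeros(())
    return mle + alpha * unlikelihood
```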
Evaluation is performed by measuring the test set perplexity over the four target label types, where contradictions should have relatively higher perplexity. We additionally evaluate a selection accuracy task, where for each test example there are two candidate responses: a positive and a negative (contradicting) statement. The candidate response with the lowest perplexity is considered to be the model's selection, and we measure the selection success rate. Evaluation is broken down by positive type (entailment, triple-entailment, neutral). Dataset statistics are given in Table Full Dialogue Task To evaluate in a more realistic setup that involves full dialogue rather than a single utterance, we take full Persona-Chat dialogues Our work provides new applications of unlikelihood training In terms of dialogue coherence, In all of our experiments we employ a large pre-trained seq2seq Transformer Evaluation results from all evaluated matchups are shown in Figure Generating consistent and coherent human-like dialogue is a core goal of natural language research. We studied several aspects that contribute to that goal, defined metrics to measure them, and proposed algorithms that improve them, mitigating some of the failings of maximum likelihood training, the current dominant approach. Our method defines objective functions under the umbrella of unlikelihood: during training, we wish to make inconsistent dialogue unlikely by lowering the probability of such events occurring. This makes generative models repeat themselves less, copy the context less, and use more rare words from the vocabulary -closer to matching human statistics. Further, utilizing supervised datasets with labeled coherent and incoherent utterances and applying unlikelihood yields measurably improved levels of coherence with respect to the aspect measured, in this case contradiction. Future work could apply this same technique with other supervised data, e.g. correcting causal or commonsense reasoning errors The experiments on repetition and copying in the main paper were carried out with greedy decoding for simplicity. In this section we show that similar results hold with beam decoding as well. Using a beam size of 5, we take the same 4 models from Table Description of ConvAI2 vocabulary setup We follow We first collected 252 model-human conversations with each of the models (MLE baseline, and weights for α of Unlikelihood, examples in 8). We then set up a pairwise-comparison using the software of Description of ELI5 repetition setup We follow | 964 | 1,409 | 964 |
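The repetition and copying statistics used throughout this paper (duplicate n-grams inside an utterance, and utterance n-grams copied from the context) can be computed in a few lines. The exact context-repeat formulation below is an assumption, since the excerpt's formula is missing, and the n-gram order is a free choice.

```python
from collections import Counter  # noqa: F401  (kept for extensions, e.g. counts per n-gram)

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def label_repeat_fraction(utterance_tokens, n=3):
    """Portion of duplicate n-grams inside a single generated utterance:
    1 - (#unique n-grams / #n-grams). Returns 0 for utterances shorter than n."""
    grams = ngrams(utterance_tokens, n)
    if not grams:
        return 0.0
    return 1.0 - len(set(grams)) / len(grams)

def context_repeat_fraction(utterance_tokens, context_tokens, n=3):
    """Portion of the utterance's n-grams that also occur in the dialogue
    context, a rough proxy for 'copying'. This exact formulation is my
    assumption; the paper reports a context-repeat statistic but the excerpt
    does not give its definition."""
    grams = ngrams(utterance_tokens, n)
    if not grams:
        return 0.0
    context_grams = set(ngrams(context_tokens, n))
    return sum(g in context_grams for g in grams) / len(grams)
```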
Automatic Metric Validation for Grammatical Error Correction | Correction (GEC) is currently done by observing the correlation between human and metric-induced rankings. However, such correlation studies are costly, methodologically troublesome, and suffer from low inter-rater agreement. We propose MAEGE, an automatic methodology for GEC metric validation, that overcomes many of the difficulties with existing practices. Experiments with MAEGE shed a new light on metric quality, showing for example that the standard M 2 metric fares poorly on corpus-level ranking. Moreover, we use MAEGE to perform a detailed analysis of metric behavior, showing that correcting some types of errors is consistently penalized by existing metrics. | Much recent effort has been devoted to automatic evaluation, both within GEC Human rankings are often considered as ground truth in text-to-text generation, but using them reliably can be challenging. Other than the costs of compiling a sizable validation set, human rank-ings are known to yield poor inter-rater agreement in MT The main contribution of this paper is an automatic methodology for metric validation in GEC called MAEGE (Methodology for Automatic Evaluation of GEC Evaluation), which addresses these difficulties. MAEGE requires no human rankings, and instead uses a corpus with gold standard GEC annotation to generate lattices of corrections with similar meanings but varying degrees of grammaticality. For each such lattice, MAEGE generates a partial order of correction quality, a quality score for each correction, and the number and types of edits required to fully correct each. It then computes the correlation of the induced partial order with the metric-induced rankings. MAEGE addresses many of the problems with existing methodology: • Human rankings yield low inter-rater and intra-rater agreement ( §3). Indeed, • CHR uses system outputs to obtain human rankings, which may be misleading, as systems may share similar biases, thus neglecting to evaluate some types of valid corrections ( §7). MAEGE addresses this issue by systematically traversing an inclusive space of corrections. • The difficulty in handling ties is addressed by only evaluating correction pairs where one contains a sub-set of the errors of the other, and is therefore clearly better. • MAEGE uses established statistical tests for determining the significance of its results, thereby avoiding ad-hoc methodologies used in CHR to tackle potential biases in human rankings ( §5, §6). In experiments on the standard NUCLE test set In addition to measuring metric reliability, MAEGE can also be used to analyze the sensitivities of the metrics to corrections of different types, which to our knowledge is a novel contribution of this work. Specifically, we find that not only are valid edits of some error types better rewarded than others, but that correcting certain error types is consistently penalized by existing metrics (Section 7). The importance of interpretability and detail in evaluation practices (as opposed to just providing bottom-line figures), has also been stressed in MT evaluation (e.g., | We turn to presenting the metrics we experiment with. The standard practice in GEC evaluation is to define differences between the source and a correction (or a reference) as a set of edits BLEU. BLEU GLEU. GLEU iBLEU. iBLEU We set α = 0.8 as suggested by Sun and Zhou. F -Score computes the overlap of edits to the source in the reference, and in the output. 
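A minimal sketch of the iBLEU score described above, which rewards similarity to the references and penalizes similarity to the source, with alpha = 0.8 as in the paper. NLTK's sentence_bleu is used here as a stand-in for whatever BLEU implementation the paper used, and the smoothing choice is mine; all inputs are token lists.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def ibleu(candidate, references, source, alpha=0.8):
    """iBLEU = alpha * BLEU(candidate, references)
             - (1 - alpha) * BLEU(candidate, source)."""
    smooth = SmoothingFunction().method1          # smoothing is an assumption
    bleu_ref = sentence_bleu(references, candidate, smoothing_function=smooth)
    bleu_src = sentence_bleu([source], candidate, smoothing_function=smooth)
    return alpha * bleu_ref - (1 - alpha) * bleu_src
```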
As system edits can be constructed in multiple ways, the standard M 2 scorer Levenshtein Distance. We use the Levenshtein distance Correlation with human rankings (CHR) is the standard methodology for assessing the validity of GEC metrics. While informative, human rankings are costly to produce, present low inter-rater agreement (shown for MT evaluation in There are two existing sets of human rankings for GEC that were compiled concurrently: GJG15 by Another source of inconsistency in CHR is that the rankings are relative and sampled, so datasets rank different sets of outputs We conclude by proposing a practice for reporting CHR in future work. First, we combine both sets of human judgments to arrive at the statistically most powerful test. Second, we compute the metrics' corpus-level rankings according to the same subset of sentences used for human rankings. The current practice of allowing metrics to rank systems based on their output on the entire CoNLL test set (while human rankings are only collected for a sub-set thereof), may bias the results due to potential non-uniform system performance on the test set. We report CHR according to the proposed protocol in Table In the following sections we present MAEGE an alternative methodology to CHR, which uses human corrections to induce more reliable and scalable rankings to compare metrics against. We begin our presentation by detailing the method MAEGE 2 The difference between our results and previously reported ones is probably due to a recent update in GLEU to better tackles multiple references (1) 1 The Ois are the original sentences, directed edges represent an application of an edit and R (i) j is the j-th perfect correction of Oi (i.e., the perfect correction that result from applying all the edits of the j-th annotation of Oi). uses to generate source-correction pairs and a partial order between them. MAEGE operates by using a corpus with gold annotation, given as edits, to generate lattices of corrections, each defined by a sub-set of the edits. Within the lattice, every pair of sentences can be regarded as a potential source and a potential output. We create sentence chains, in an increasing order of quality, taking a source sentence and applying edits in some order one after the other (see Figure Formally, for each sentence s in the corpus and each annotation a, we have a set of typed edits edits(s, a) = {e s,a } of size n s,a . We call 2 edits(s,a) the corrections lattice, and denote it with E s,a . We call, s, the correction corresponding to ∅ the original. We define a partial order relation between x, y ∈ E s,a such that x < y if x ⊂ y. This order relation is assumed to be the gold standard ranking between the corrections. For our experiments, we use the NUCLE test data Social media makes our life patten so fast and leave us less time to think about our life. Social media make our life patten so fast and leave us less time to think about our life. Social media make our pace of life so fast and leave us less time to think about our life. left leave makes make life patten pace of life references, produced by Sentences which require no correction according to at least one of the two annotations are discarded. In 26 cases where two edit spans intersect in the same annotation (out of a total of about 40K edits), the edits are manually merged or split. We conduct a corpus-level analysis, namely testing the ability of metrics to determine which corpus of corrections is of better quality. 
In practice, this procedure is used to rank systems based on their outputs on the test corpus. In order to compile corpora corresponding to systems of different quality levels, we define sev-eral corpus models, each applying a different expected number of edits to the original. Models are denoted with the expected number of edits they apply to the original which is a positive number M ∈ R + . Given a corpus model M , we generate a corpus of corrections by traversing the original sentences, and for each sentence s uniformly sample an annotation a (i.e., a set of edits that results in a perfect correction), and the number of edits applied n edits , which is sampled from a clipped binomial probability with mean M and variance 0.9. Given n edits , we uniformly sample from the lattice E s,a a sub-set of edits of size n edits , and apply this set of edits to s. The corpus of M = 0 is the set of originals. The corpus of source sentences, against which all other corpora are compared, is sampled by traversing the original sentences, and for each sentence s, uniformly sample an annotation a, and given s, a, uniformly sample a sentence from E s,a . Given a metric m ∈ METRICS, we compute its score for each sampled corpus. Where corpuslevel scores are not defined by the metrics themselves, we use the average sentence score instead. We compare the rankings induced by the scores of m and the ranking of systems according to their corpus model (i.e., systems that have a higher M should be ranked higher), and report the correlation between these rankings. Setup. We sample chains using the same sampling method as in §6, and uniformly sample a source from each chain. For each edit type t, we detect all pairs of corrections in the sampled chains that only differ in an edit of type t, and use them to compute ∆ m,t . We use the set of 27 edit types given in the NUCLE corpus. Results. Table In general, the tendency of reference-based metrics (the vast majority of GEC metrics) to penalize edits of various types suggests that many edit types are under-represented in available reference sets. Automatic evaluation of systems that perform these edit types may, therefore, be unreliable. Moreover, not addressing these biases in the metrics may hinder progress in GEC. Indeed, M 2 and GLEU, two of the most commonly used metrics, only award a small sub-set of edit types, thus offering no incentive for systems to improve performance on such types. 6 We proceed by presenting a method for assessing the correlation between metric-induced scores of corrections of the same sentence, and the scores given to these corrections by MAEGE. Given a sentence s and an annotation a, we sample a random permutation over the edits in edits(s, a). We denote the permutation with σ ∈ S ns,a , where S ns,a is the permutation group over {1, • • • , n s,a }. Given σ, we define a monotonic chain in E i,j as: s,a } < . . . < edits(s, a) For each chain, we uniformly sample one of its elements, mark it as the source, and denote it with src. In order to generate a set of chains, MAEGE traverses the original sentences and annotations, and for each sentence-annotation pair, uniformly samples n ch chains without repetition. It then uniformly samples a source sentence from each chain. If the number of chains in E s,a is smaller than n ch , MAEGE selects all the chains. Given a metric m ∈ METRICS, we compute its score for every correction in each sampled chain against the sampled source and available references. 
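A sketch of the corpus-model sampling described above: for each original sentence, pick one annotation, draw the number of edits to apply from a binomial centred on the corpus model M, and apply a random subset of that size. The edit representation, the binomial parameterization (the paper also fixes the variance at 0.9, which is not reproduced here), and the clipping details are assumptions.

```python
import random
import numpy as np

def apply_edits(tokens, edits):
    """Apply non-overlapping edits, each a (start, end, replacement_tokens)
    triple over the token list, right-to-left so offsets stay valid."""
    out = list(tokens)
    for start, end, repl in sorted(edits, reverse=True):
        out[start:end] = repl
    return out

def sample_corpus(originals, annotations, M):
    """Sample one correction per original sentence for corpus model M (the
    expected number of applied edits). `originals` is a list of token lists;
    annotations[i] is a list of edit sets (one per annotator) for sentence i."""
    corpus = []
    for tokens, anns in zip(originals, annotations):
        edits = list(random.choice(anns))            # pick one annotation
        if not edits or M == 0:
            corpus.append(list(tokens))              # M = 0 keeps the original
            continue
        p = min(M / len(edits), 1.0)                 # binomial mean ~= M
        n_edits = np.random.binomial(len(edits), p)
        corpus.append(apply_edits(tokens, random.sample(edits, n_edits)))
    return corpus
```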
We compute the sentence-level correlation of the rankings induced by the scores of m and the rankings induced by <. For computing rank correlation (such as Spearman ρ or Kendall τ ), such a relative ranking is sufficient. We report Kendall τ , which is only sensitive to the relative ranking of correction pairs within the same chain. Kendall is minimalistic in its assumptions, as it does not require numerical scores, but only assuming that < is well-motivated, i.e., that applying a set of valid edits is better in quality than applying only a subset of it. As < is a partial order, and as Kendall τ is standardly defined over total orders, some modification is required. τ is a function of the number of compared pairs and of discongruent pairs (ordered differently in the compared rankings): To compute these quantities, we extract all unique pairs of corrections that can be compared with < (i.e., one applies a sub-set of the edits of the other), and count the number of discongruent ones between the metric's ranking and <. Significance is modified accordingly. 5 Spearman ρ is less applicable in this setting, as it compares total orders whereas here we compare partial orders. To compute linear correlation with Pearson r, we make the simplifying assumption that all edits contribute equally to the overall quality. Specifically, we assume that a perfect correction (i.e., the top of a chain) receives a score of 1. Each original sentence s (the bottom of a chain), for which there exists annotations a 1 , . . . , a n , receives a score of The scores of partial (non-perfect) corrections in each chain are linearly spaced between the score of the perfect correction and that of the original. This scoring system is well-defined, as a partial correction receives the same score according to all chains it is in, as all paths between a partial correction and the original have the same length. We revisit the argument that using system outputs to perform metric validation poses a methodological difficulty. Indeed, as GEC systems are developed, trained and tested using available metrics, and as metrics tend to reward some correction types and penalize others ( §7), it is possible that GEC development adjusts to the metrics, and neglects some error types. Resulting tendencies in GEC systems would then yield biased sets of outputs for human rankings, which in turn would result in biases in the validation process. To make this concrete, GEC systems are often precision-oriented: trained to prefer not to correct than to invalidly correct. Indeed, Choshen and 6 LDS→O tends to award valid corrections of almost all types. As source sentences are randomized across chains, this indicates that on average, corrections with more applied edits tend to be more similar to comparable corrections on the lattice. This is also reflected by the slightly positive sentencelevel correlation of LDS→O ( §6). We use MAEGE to mimic a setting of ranking against precision-oriented outputs. To do so, we perform corpus-level and sentence-level analyses, but instead of randomly sampling a source, we invariably take the original sentence as the source. We thereby create a setting where all edits applied are valid (but not all valid edits are applied). Comparing the results to the regular MAEGE correlation (Table Drawbacks. Like any methodology MAEGE has its simplifying assumptions and drawbacks; we wish to make them explicit. 
First, any biases introduced in the generation of the test corpus are inherited by MAEGE (e.g., that edits are contiguous and independent of each other). Second, MAEGE does not include errors that a human will not perform but machines might, e.g., significantly altering the meaning of the source. This partially explains why LT, which measures grammaticality but not meaning preservation, excels in our experiments. Third, MAEGE's scoring system ( §6) assumes that all errors damage the score equally. While this assumption is made by GEC metrics, we believe it should be refined in future work by collecting user information. In this paper, we show how to leverage existing annotation in GEC for performing validation reliably. We propose a new automatic methodology, MAEGE, which overcomes many of the shortcomings of the existing methodology. Experiments with MAEGE reveal a different picture of metric quality than previously reported. Our analysis suggests that differences in observed metric quality are partly due to system outputs sharing consistent tendencies, notably their tendency to under-predict corrections. As existing methodology ranks system outputs, these shared tendencies bias the validation process. The difficulties in basing validation on system outputs may be applicable to other text-to-text generation tasks, a question we will explore in future work. | 672 | 2,406 | 672 |
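As a companion to the sentence-level analysis above, here is one way to compute the modified Kendall tau over the partial order <: only pairs of corrections in the same chain where one applies a strict subset of the other's edits are compared, and tau = 1 - 2 * discordant / compared. That closed form is my reconstruction of the formula lost from the excerpt (it matches the standard Kendall definition restricted to comparable pairs); the data layout is illustrative.

```python
from itertools import combinations

def partial_kendall_tau(chain_corrections, metric_scores):
    """Modified Kendall tau over the partial order <.

    chain_corrections: dict mapping a chain id to a list of
                       (applied_edit_set, correction_string) tuples
    metric_scores:     dict mapping correction_string to the metric's score
                       against the sampled source and references
    """
    compared = discordant = 0
    for corrections in chain_corrections.values():
        for (edits_a, corr_a), (edits_b, corr_b) in combinations(corrections, 2):
            if edits_a < edits_b:                  # a applies a strict subset: a < b
                worse, better = corr_a, corr_b
            elif edits_b < edits_a:
                worse, better = corr_b, corr_a
            else:
                continue                           # not comparable under <
            compared += 1
            if metric_scores[worse] > metric_scores[better]:
                discordant += 1                    # metric ties count as congruent
    return 1.0 - 2.0 * discordant / compared if compared else float("nan")
```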
Robust Hate Speech Detection via Mitigating Spurious Correlations | We develop a novel robust hate speech detection model that can defend against both wordand character-level adversarial attacks. We identify the essential factor that vanilla detection models are vulnerable to adversarial attacks is the spurious correlation between certain target words in the text and the prediction label. To mitigate such spurious correlation, we describe the process of hate speech detection by a causal graph. Then, we employ the causal strength to quantify the spurious correlation and formulate a regularized entropy loss function. We show that our method generalizes the backdoor adjustment technique in causal inference. Finally, the empirical evaluation shows the efficacy of our method. 1 , | Online social media bring people together and encourage people to share their thoughts freely. However, it also allows some users to misuse the platforms to promote the hateful language. As a result, hate speech, which "expresses hate or encourages violence towards a person or group based on characteristics such as race, religion, sex, or sexual orientation" Research on defending against adversarial attacks in the text domain has been received significant attention in recent years In this paper, we develop a novel robust hate speech detection model. We target the situation where a group of target words could be replaced with any words even with entire different semantic meanings. We identify the essential factor to defend such attacks as to capture the causation between the semantic meaning of input text and the label and remove the spurious correlation between them. To this end, we use causal graphs | A hate speech detection model can be defined as a functional mapping from T to Y , where t ∈ T is a set of input texts and y ∈ Y is the target label set. In general, the output of the detection model is the softmax probability of predicting each class k, i.e., f k (t; θ) = P (Y = y k |t), where θ is the parameters of the model. We presume a given group of target words (usually hateful or sentiment words) denoted by H, and use X to indicate the remaining text excluding the words in H, i.e., T = ⟨X, H⟩. Adversarial examples are inputs to detection models with perturbations on H that purposely cause the model make mistakes. Causal graphs are widely used for representing causal relationships among variables We propose a causal graph for modeling the hate speech detection shown in Fig. Based on the causal graph, we identify one major reason that vanilla detection models are not robust to adversarial attacks: the detection models make predictions based on both the semantic meanings of texts and the spurious correlation between X and Y via H (i.e., X ← I → H → Y ) that significantly relates to the occurrence of the target words. When the target works, like the f-word, are strongly correlated with the hate label in the training dataset, the model trained on such data may easily make predictions based on the occurrence of the target words without considering the meanings of entire texts. Therefore, once the adversarial attacks that remove such correlations are conducted, the detection model is easy to be fooled. In order to make the detection model robust to any perturbation, one needs to prevent the model from learning the spurious correlation. To this end, we propose to penalize the causal influence of H on Y during the training so that the spurious correlation can be blocked. 
Inferring causal influences of input on predictions is a challenging task in machine learning. In this paper, we advocate the use of the causal strength proposed in where the second equality is due to factorization. Since the causal strength measures the influence of the word substitution, our problem becomes to penalize the causal strength in the training. In order to integrate the causal strength into the objective function, we rewrite Eq. ( C H→Y = x,h,y P (x, h, y) log P (y|x, h) x,h,y P (x, h, y) log For the first term of Eq. ( where N is the number of text in the data, j indicates the j-th text, and k is the class index. We similarly reformulate the second term of Eq. ( Finally, by adding the causal strength as a regularization term into the cross-entropy loss, we obtain the regularized cross-entropy loss as follows. where λ ∈ [0, 1] is the coefficient for balancing the model utility and the model robustness. We further analyze the meaning of the term L I in Eq. ( (5) Comparing Eqs. ( In Eq. ( We consider five baselines in the experiments: the base BERT and HateBERT To evaluate the robustness of all models, we use three different versions of the test dataset: the clean version, the word-level attack version where each word from the texts present in the list L is randomly replaced by one of the words in L, and the character-level attack version where each word in L is replaced by a misspelled version. Our model uses the pre-trained BERT as the base model which is then fine-tuned by minimizing Eq. (4) on our training data. By default λ = 0.5. The prior probability P (h ′ ) for a target word h ′ is calculated by dividing the total occurrence of h ′ in the training data by the total occurrence of all the words in L in the training data. We refer to our Robust Hate Speech Detection. We first evaluate the performance of all models on three test datasets in terms of accuracy, precision, recall and F1 scores of the positive (i.e., hate) class as well as the Macro F1. The mean and standard deviation of five runs are shown in Table We developed a robust hate speech detection model by leveraging the causal inference to mitigate spurious correlations. The experiment results show that our model can achieve better performance under both word-and character-level attacks compared with other baselines. | 717 | 913 | 717 |
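A sketch of the regularized objective of this paper, in the spirit of Eqs. (2)-(4) above: standard cross-entropy plus a penalty on the estimated causal strength of the target word H on the label Y, which compares log p(y | x, h) with log sum over h' of P(h') p(y | x, h'), with h' ranging over the target-word lexicon. The interface model(x_inputs, h_ids) returning class logits, and re-scoring the text once per lexicon word, are illustrative assumptions about how the text and target word are recombined, not the released code.

```python
import torch
import torch.nn.functional as F

def regularized_loss(model, x_inputs, h_ids, labels, lexicon_ids,
                     lexicon_prior, lam=0.5):
    """Cross-entropy plus lambda * estimated causal strength of H on Y.

    x_inputs:      encoded text with the target-word slot factored out
    h_ids:         (B,) token ids of the observed target words
    labels:        (B,) gold class ids
    lexicon_ids:   list of token ids for the target-word lexicon L
    lexicon_prior: list of prior probabilities P(h') for each lexicon word
    """
    log_p = F.log_softmax(model(x_inputs, h_ids), dim=-1)        # (B, C)
    ce = F.nll_loss(log_p, labels)
    log_p_y = log_p.gather(1, labels.unsqueeze(1)).squeeze(1)    # log p(y|x,h)

    mixture = torch.zeros_like(log_p_y)                          # sum_h' P(h') p(y|x,h')
    for h_prime, prior in zip(lexicon_ids, lexicon_prior):
        h_sub = torch.full_like(h_ids, h_prime)                  # substitute h'
        p_y_h = F.softmax(model(x_inputs, h_sub), dim=-1)
        mixture = mixture + prior * p_y_h.gather(1, labels.unsqueeze(1)).squeeze(1)

    causal_strength = (log_p_y - torch.log(mixture + 1e-12)).mean()
    return ce + lam * causal_strength
```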
Named Entity Recognition with Character-Level Models | We discuss two named-entity recognition models which use characters and character n-grams either exclusively or as an important part of their data representation. The first model is a character-level HMM with minimal context information, and the second model is a maximum-entropy conditional markov model with substantially richer context features. Our best model achieves an overall F1 of 86.07% on the English test data (92.31% on the development data). This number represents a 25% error reduction over the same model without word-internal (substring) features. | For most sequence-modeling tasks with word-level evaluation, including named-entity recognition and part-of-speech tagging, it has seemed natural to use entire words as the basic input features. For example, the classic HMM view of these two tasks is one in which the observations are words and the hidden states encode class labels. However, because of data sparsity, sophisticated unknown word models are generally required for good performance. A common approach is to extract word-internal features from unknown words, for example suffix, capitalization, or punctuation features. Here, we examine the utility of taking character sequences as a primary representation. We present two models in which the basic units are characters and character n-grams, instead of words and word phrases. Earlier papers have taken a character-level approach to named entity recognition (NER). | When using character-level models for word-evaluated tasks, one would not want multiple characters inside a single word to receive different labels. This can be avoided in two ways: by explicitly locking state transitions inside words, or by careful choice of transition topology. In our current implementation, we do the latter. Each state is a pair (t, k), where t is an entity type (such as PERSON, and including an other type) and k indicates the length of time the system has been in state t. Therefore, a state like (PERSON, 2) indicates the second letter inside a person phrase. The final letter of a phrase is a following space (we insert one if there is none) and the state is a special final state like (PERSON, F). Additionally, once k reaches our n-gram history order, it stays there. We then use empirical, unsmoothed estimates for state-state transitions. This annotation and estimation enforces consistent labellings in practice. For example, (PERSON, 2) can only transition to the next state (PERSON, 3) or the final state (PERSON, F). Final states can only transition to beginning states, like (other, 1). For emissions, we must estimate a quantity of the form P(c i | c i-n+1 , . . . , c i-1 , s i ), the probability of the current character given the previous n-1 characters and the current state. Given this model, we can do Viterbi decoding in the standard way. To be clear on what this model does and does not capture, we consider a few examples (where _ indicates a space). First, we might be asked for a quantity like P(e | to_Denv, (LOCATION, 5)). In this case, we know both that we are in the middle of a location that begins with Denv and also that the preceding context was to. In essence, encoding k into the state lets us distinguish the beginnings of phrases, which lets us model trends like named entities (all the classes besides other) generally starting with capital letters in English. Second, we may be asked for quantities like P(_ | Denver, (LOCATION, F)), which allows us to model the ends of phrases.
Here we have a slight complexity: by the notation, one would expect such emissions to have probability 1, since nothing else can be emitted from a final state. In practice, we have a special stop symbol in our n-gram counts, and the probability of emitting a space from a final state is the probability of the n-gram having chosen the stop character. We did also try to incorporate gazetteer information by adding ¤ -gram counts from gazetteer entries to the train- ing counts that back the above character emission model. However, this reduced performance (by 2.0% with context on). The supplied gazetteers appear to have been built from the training data and so do not increase coverage, and provide only a flat distribution of name phrases whose empirical distributions are very spiked. Given the amount of improvement from using a model backed by character ¤ -grams instead of word ¤ -grams, the immediate question is whether this benefit is complementary to the benefit from features which have traditionally been of use in word level systems, such as syntactic context features, topic features, and so on. To test this, we constructed a maxent classifier which locally classifies single words, without modeling the entity type sequences ¡ . In order to include state sequence features, which allow the classifications at various positions to interact, we have to abandon classifying each position independently. Sequence-sensitive features can be included by chaining our local classifiers together and performing joint inference, i.e., by building a conditional markov model (CMM), also known as a maximum entropy markov model The remaining improvements involved a number of other features which directly targetted observed error types. These features included letter type pattern features (for example 20-month would become d-x for digitlowercase and Italy would become Xx for mixed case). This improved performance substantially, for example allowing the system to detect ALL CAPS regions. Table The primary argument of this paper is that character substrings are a valuable, and, we believe, underexploited source of model features. In an HMM with an admittedly very local sequence model, switching from a word model to a character model gave an error reduction of about 30%. In the final, much richer chained maxent setting, the reduction from the best model minus ¤ -gram features to the reported best model was about 25% -smaller, but still substantial. This paper also again demonstrates how the ease of incorporating features into a discriminative maxent model allows for productive feature engineering. | 565 | 888 | 565 |
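The letter-type patterns and word-internal character n-grams discussed above are straightforward to compute; the sketch below reproduces the two examples from the text ('20-month' → 'd-x', 'Italy' → 'Xx') together with a simple substring-feature extractor. It is only an illustration of the feature types; the exact feature templates of the original system are not claimed here.

```python
def letter_type_pattern(word):
    """Collapse a word to a coarse shape: digit -> d, lowercase -> x,
    uppercase -> X, other characters kept as-is, with runs of the same type
    collapsed (e.g. '20-month' -> 'd-x', 'Italy' -> 'Xx')."""
    def klass(ch):
        if ch.isdigit():
            return "d"
        if ch.isalpha():
            return "X" if ch.isupper() else "x"
        return ch                      # keep punctuation such as '-'
    collapsed = []
    for ch in word:
        k = klass(ch)
        if not collapsed or collapsed[-1] != k:
            collapsed.append(k)
    return "".join(collapsed)

def char_ngrams(word, n_max=4):
    """Word-internal character n-grams, padded with boundary symbols so that
    prefixes and suffixes are distinguished from word-internal substrings."""
    padded = "<" + word + ">"
    grams = []
    for n in range(2, n_max + 1):
        grams.extend(padded[i:i + n] for i in range(len(padded) - n + 1))
    return grams

print(letter_type_pattern("20-month"))        # d-x
print(letter_type_pattern("Italy"))           # Xx
print(char_ngrams("Denver", n_max=3)[:6])
```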
A Formal Hierarchy of RNN Architectures | We develop a formal hierarchy of the expressive capacity of RNN architectures. The hierarchy is based on two formal properties: space complexity, which measures the RNN's memory, and rational recurrence, defined as whether the recurrent update can be described by a weighted finite-state machine. We place several RNN variants within this hierarchy. For example, we prove the LSTM is not rational, which formally separates it from the related QRNN (Bradbury et al., 2016). We also show how these models' expressive capacity is expanded by stacking multiple layers or composing them with different pooling functions. Our results build on the theory of "saturated" RNNs | While neural networks are central to the performance of today's strongest NLP systems, theoretical understanding of the formal properties of different kinds of networks is still limited. It is established, for example, that the Elman (1990) RNN is Turing-complete, given infinite precision and computation time Recently, In a separate line of work, We compare the expressive power of rational and non-rational RNNs, distinguishing between state expressiveness (what kind and amount of information the RNN states can capture) and language expressiveness (what languages can be recognized when the state is passed to a classifier). To do this, we build on the theory of saturated RNNs. | We introduce a unified hierarchy (Figure We provide the first formal proof that LSTMs can encode functions that rational recurrences cannot. On the other hand, we show that the saturated Elman RNN and GRU are rational recurrences with constant space complexity, whereas the QRNN has unbounded space complexity. We also show that an unrestricted WFA has rich expressive power beyond any saturated RNN we consider-including the LSTM. This difference potentially opens the door to more expressive RNNs incorporating the computational efficiency of rational recurrences. Language expressiveness When applied to classification tasks like language recognition, RNNs are typically combined with a "decoder": additional layer(s) that map their hidden states to a prediction. Thus, despite differences in state expressiveness, rational RNNs might be able to achieve comparable empirical performance to non-rational RNNs on NLP tasks. In this work, we consider the setup in which the decoders only view the final hidden state of the RNN. Experiments Finally, we conduct experiments on formal languages, justifying that our theorems correctly predict which languages unsaturated recognizers trained by gradient descent can learn. Thus, we view our hierarchy as a useful formal tool for understanding the relative capabilities of different RNN architectures. Roadmap We present the formal devices for our analysis of RNNs in Section 2. In Section 3 we develop our hierarchy of state expressiveness for single-layer RNNs. In Section 4, we shift to study RNNs as language recognizers. Finally, in Section 5, we provide empirical results evaluating the relevance of our predictions for unsaturated RNNs. In this work, we analyze RNNs using formal models from automata theory-in particular, WFAs and counter automata. In this section, we first define the basic notion of an encoder studied in this paper, and then introduce more specialized formal concepts: WFAs, counter machines (CMs), space complexity, and, finally, various RNN architectures. 
We view both RNNs and automata as encoders: machines that can be parameterized to compute a set of functions f : Σ * → Q k , where Σ is an input alphabet and Q is the set of rational reals. Given an encoder M and parameters θ, we use M θ to represent the specific function that the parameterized encoder computes. For each encoder, we refer to the set of functions that it can compute as its state expressiveness. For example, a deterministic finite state acceptor (DFA) is an encoder whose parameters are its transition graph. Its state expressiveness is the indicator functions for the regular languages. Formally, a WFA is a non-deterministic finite automaton where each starting state, transition, and final state is weighted. Let Q denote the set of states, Σ the alphabet, and Q the rational reals. 1. Initial state weights λ Final state weights ρ : Q → Q The weights are used to encode any string x ∈ Σ * : Definition 1 (Path score). Let π be a path of the form q The score of π is given by By Π(x), denote the set of paths producing x. Definition 2 (String encoding). The encoding computed by a WFA A on string x is Hankel matrix Given a function f : Σ * → Q and two enumerations α, ω of the strings in Σ * , we define the Hankel matrix of f as the infinite matrix where or refer to a sub-block of a Hankel matrix, row-and columnindexed by prefixes and suffixes P, S ⊆ Σ * . The following result relates the Hankel matrix to WFAs: Theorem 1 For any f : Σ * → Q, there exists a WFA that computes f if and only if H f has finite rank. Rational series We now turn to introducing a different type of encoder: the real-time counter machine (CM; Definition 3 (General CM; A CM processes input tokens {x t } n t=1 sequentially. Denoting q t , c t ∈ Q × Z k a CM's configuration at time t, define its next configuration: (2) where 1 =0 is a broadcasted "zero-check" operation, i.e., 1 =0 (v) i 1 =0 (v i ). In ( 1. A CM is Σ-restricted iff u and δ depend only on the current input σ ∈ Σ. 2. A CM is (Σ × Q)-restricted iff u and δ depend only on the current input σ ∈ Σ and the current state q ∈ Q. restricted, and the states Q are windows over the last w input tokens, e.g., Q = Σ ≤w . The memory introduced by the stack data structure pushes the encoder into Θ(n) space. We formalize this by showing that, like a WFA, the stack RNN can encode binary strings to their value. Lemma 5. The saturated stack RNN can compute the converging binary encoding function, i.e., 101 → 1 • 1 + 0.5 • 0 + 0.25 • 1 = 1.25. A saturated neural network is a discrete approximation of neural network considered by where N θ denotes the parameters θ multiplied by a scalar N . This transforms each "squashing" function (sigmoid, tanh, etc.) to its extreme values (0, ±1). In line with prior work A recurrent neural network (RNN) is a parameterized update function g θ : The recurrent update function g can take several forms. The original and most simple form is that of the Elman RNN. Since then, more elaborate forms using gating mechanisms have become popular, among them the LSTM, GRU, and QRNN. Elman RNNs (Elman, 1990) Let x t be a vector embedding of x t . For brevity, we suppress the bias terms in this (and the following) affine operations. (5) We refer to the saturated Elman RNN as the s-RNN. The s-RNN has Θ(1) space LSTMs The LSTM can use its memory vector c t as a register of counters GRUs (15) where z t , f t , o t are respectively rows of Z, F, O. 
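The WFA encoding of Definition 2 need not be computed by enumerating paths: arranging the transition weights for each symbol into a matrix gives A(x) = λ M(x_1) · · · M(x_n) ρ, a standard forward computation. The sketch below applies it to a two-state WFA that maps a binary string to its value (101 → 5), in the spirit of the construction referred to later around Theorem 6; the particular weights shown are one consistent choice, not necessarily those of the paper's figure.

```python
import numpy as np

def wfa_encode(x, init, trans, final):
    """Forward computation of a WFA encoding: the sum over all paths producing x
    of (initial weight * transition weights * final weight), computed with matrix
    products rather than path enumeration."""
    v = init.copy()
    for sym in x:
        v = v @ trans[sym]
    return float(v @ final)

# A two-state WFA computing the value of a binary string. State 1 carries the
# running value (doubled on every symbol, incremented on '1'); state 0 keeps a
# constant weight of 1 available so the increment can be added.
init = np.array([1.0, 0.0])
final = np.array([0.0, 1.0])
trans = {
    "0": np.array([[1.0, 0.0], [0.0, 2.0]]),
    "1": np.array([[1.0, 1.0], [0.0, 2.0]]),
}
print(wfa_encode("101", init, trans, final))   # 5.0
print(wfa_encode("1101", init, trans, final))  # 13.0
```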
A QRNN Q can be seen as an LSTM in which all uses of the state vector h t have been replaced with a computation over the last w input tokens-in this way it is similar to a CNN. The s-QRNN has Θ(log n) space, as the analysis of We now turn to presenting our results. In this section, we develop a hierarchy of single-layer RNNs based on their state expressiveness. A set-theoretic view of the hierarchy is shown in Figure Let R be the set of rational series. The hierarchy relates Θ(log n) space to the following sets: • RR As in • RR-hard An encoder is RR-hard iff its state expressiveness contains R. A Turing machine is RR-hard, as it can simulate any WFA. • RR-complete Finally, an encoder is RRcomplete iff its state expressiveness is equivalent to R. A trivial example of an RRcomplete encoder is a vector of k WFAs. The different RNNs are divided between the intersections of these classes. In Subsection 3.1, we prove that the s-LSTM, already established to have Θ(log n) space, is not RR. In Subsection 3.2, we demonstrate that encoders with restricted counting ability (e.g., QRNNs) are RR, and in Subsection 3.3, we show the same for all encoders with finite state (CNNs, s-RNNs, and s-GRUs). In Subsection 3.4, we demonstrate that none of these RNNs are RR-hard. In Appendix F, we extend this analysis from RNNs to self attention. We find that encoders like the s-LSTM-which, as discussed in Subsection 2.3, is "aware" of its current counter values-are not RR. To do this, we construct f 0 : {a, b} * → N that requires counter awareness to compute on strings of the form a * b * , making it not rational. We then construct an s-LSTM computing f 0 over a * b * . Let # a-b (x) denote the number of as in string x minus the number of bs. Definition 5 (Rectified counting). Therefore rank(A n ) = n-1. Thus, for all n, there is a sub-block of H f with rank n -1, and so rank(H f ) is unbounded. It follows from Theorem 1 that there is no WFA computing f . Theorem 2. The s-LSTM is not RR. Let σ/±m denote a transition that consumes σ and updates the counter by ±m. We write σ, =0/±m (or =) for a transition that requires the counter is 0. Proof. Assume the input has the form a i b j for some i, j. Consider the following LSTM 8 : For a string a i b j , the update in ( While the counter awareness of a general CM enables it to compute non-rational functions, CMs that cannot view their counters are RR. Theorem 3. Any Σ-restricted CM is RR. Proof. We show that any function that a Σrestricted CM can compute can also be computed by a collection of WFAs. The CM update operations (-1, +0, +1, or ×0) can all be reexpressed in terms of functions r(x), u(x) : Σ * → Z k to get: A WFA computing [c t ] i is shown in Figure The WFA in Figure In many rational RNNs, the updates at different time steps are independent of each other outside of a window of w tokens. Theorem 4 tells us this independence is not an essential property of rational encoders. Rather, any CM where the update is conditioned by finite state (as opposed to being conditioned by a local window) is in fact RR. Furthermore, since (Σ w )-restricted CMs are a special case of (Σ × Q)-restricted CMs, Theorem 4 can be directly applied to show that the s-QRNN is RR. See Appendix A for further discussion of this. Theorem 4 motivates us to also think about finitespace encoders: i.e., encoders with no counters" where the output at each prefix is fully determined by a finite amount of memory. The following lemma implies that any finite-space encoder is RR: Proof. 
Since f is computable in Θ(1) space, there exists a DFA A f whose accepting states are isomorphic to the range of f . We convert A f to a WFA by labelling each accepting state by the value of f that it corresponds to. We set the starting weight of the initial state to 1, and 0 for every other state. We assign each transition weight 1. Since the CNN, s-RNN, and s-GRU have finite state, we obtain the following result: Theorem 5. The CNN, s-RNN, and s-GRU are RR. While While "rational recurrence" is often used to indicate the simplicity of an RNN architecture, we find in this section that WFAs are surprisingly computationally powerful. Figure Figure Figure Theorem 6. Both the saturated and unsaturated RNN, GRU, QRNN, and LSTM 9 are not RR-hard. Proof. Consider the function f b mapping binary strings to their value, e.g. 101 → 5. The WFA in Figure In contrast, memory networks can have Θ(n) space. Appendix G explores this for stack RNNs. Appendix F presents preliminary results extending saturation analysis to self attention. We show saturated self attention is not RR and consider its space complexity. We hope further work will more completely characterize saturated self attention. Having explored the set of functions expressible internally by different saturated RNN encoders, we turn to the languages recognizable when using them with a decoder. We consider the following setup: 1. An s-RNN encodes x to a vector h t ∈ Q k . 2. A decoder function maps the last state h t to an accept/reject decision, respectively: {1, 0}. 9 As well as CMs. We say that a language L is decided by an encoder-decoder pair e, d if d(e(x)) = 1 for every sequence x ∈ L and otherwise d(e(x)) = 0. We explore which languages can be decided by different encoder-decoder pairings. Some related results can be found in Let d 1 be the single-layer linear decoder parameterized by w and b. For an encoder architecture E, we denote by D 1 (E) the set of languages decidable by E with d 1 . We use D 2 (E) analogously for a 2-layer decoder with 1 >0 activations, where the first layer has arbitrary width. We refer to sets of strings using regular expressions, e.g. a * = {a i | i ∈ N}. To illustrate the purpose of the decoder, consider the following language: The Hankel sub-block of the indicator function for L ≤ over P = a * , S = b * is lower triangular. Therefore, no RR encoder can compute it. However, adding the D 1 decoder allows us to compute this indicator function with an s-QRNN, which is RR. We set the s-QRNN layer to compute the simple series c t = # a-b (x) (by increasing on a and decreasing on b). The D 1 layer then checks c t ≤ 0. So, while the indicator function for L ≤ is not itself rational, it can be easily recovered from a rational representation. Thus, L ≤ ∈ D 1 (s-QRNN). We compare the language expressiveness of several rational and non-rational RNNs on the following: a n b n is more interesting than L ≤ because the D 1 decoder cannot decide it simply by asking the encoder to track # a-b (x), as that would require it to compute the non-linearly separable =0 function. Thus, it appears at first that deciding a n b n with D 1 might require a non-rational RNN encoder. However, we show below that this is not the case. Let • denote stacking two layers. We will go on to discuss the following results: , and show that H f has finite rank. It follows that there exists a WFA that can decide a n b n with the D 1 decoder. Counterintuitively, a n b n can be recognized using rational encoders. 
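The division of labour just described for L≤ -- a rational encoder tracks the single counter c_t = #a-b(x) and the one-layer decoder D1 merely thresholds it -- can be written out directly. The counter below is coded by hand rather than as an s-QRNN, so this is an illustration of the argument, not the model itself.

```python
def count_a_minus_b(x):
    """Rational 'encoder': the running counter c_t = #a(x) - #b(x)."""
    c = 0
    for ch in x:
        c += 1 if ch == "a" else -1
    return c

def d1_decides_L_le(x, w=-1.0, b=0.0):
    """D1 decoder: a single linear threshold over the final counter value.
    With w = -1 and b = 0 it accepts exactly when #a(x) <= #b(x)."""
    return w * count_a_minus_b(x) + b >= 0

print(d1_decides_L_le("aabbb"))   # True: two a's, three b's
print(d1_decides_L_le("aaab"))    # False
```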
QRNNs (Appendix C) Although a n b n ∈ D 1 (WFA), it does not follow that every rationally recurrent model can also decide a n b n with the help of D 1 . Indeed, in Theorem 9, we prove that a n b n / ∈ D 1 (s-QRNN), whereas a n b n ∈ D 1 (s-LSTM) (Theorem 13). It is important to note that, with a more complex decoder, the QRNN could recognize a n b n . For example, the s-QRNN can encode c 1 = # a-b (x) and set c 2 to check whether x contains ba, from which a D 2 decoder can recognize a n b n (Theorem 10). This does not mean the hierarchy dissolves as the decoder is strengthened. We show that a n b n Σ *which seems like a trivial extension of a n b n -is not recognizable by the s-QRNN with any decoder. This result may appear counterintuitive, but in fact highlights the s-QRNN's lack of counter awareness: it can only passively encode the information needed by the decoder to recognize a n b n . Failing to recognize that a valid prefix has been matched, it cannot act to preserve that information after additional input tokens are seen. We present a proof in Theorem 11. In contrast, in Theorem 14 we show that the s-LSTM can directly encode an indicator for a n b n Σ * in its internal state. Proof sketch: a n b n Σ * / ∈ D(s-QRNN). A sequence s 1 ∈ a n b n Σ * is shuffled to create s 2 / ∈ a n b n Σ * with an identical multi-set of counter up-dates. We refer to this technique as the suffix attack, and note that it can be used to prove for multiple other languages L ∈ D 2 (s-QRNN) that L•Σ * is not in D(s-QRNN) for any decoder D. 2-layer QRNNs Adding another layer overcomes the weakness of the 1-layer s-QRNN, at least for deciding a n b n . This follows from the fact that a n b n ∈ D 2 (s-QRNN): the second QRNN layer can be used as a linear layer. Similarly, we show in Theorem 10 that a 2-layer s-QRNN can recognize a n b n Σ * ∪ { }. This suggests that adding a second s-QRNN layer compensates for some of the weakness of the 1-layer s-QRNN, which, by the same argument for a n b n Σ * cannot recognize a n b n Σ * ∪ { } with any decoder. Finally, we study the theoretical case where the decoder is an arbitrary recursively enumerable (RE) function. We view this as a loose upper bound of stacking many layers after a rational encoder. What information is inherently lost by using a rational encoder? WFAs can uniquely encode each input, making them Turing-complete under this setup; however, this does not hold for rational s-RNNs. Assuming an RR-complete encoder, a WFA like Figure Bounded space However, the Θ(log n) space bound of saturated rational RNNs like the s-QRNN means these models cannot fully encode the input. In other words, some information about the prefix x :t must be lost in c t . Thus, rational s-RNNs are not Turing-complete with an RE decoder. In Subsection 4.3, we showed that different saturated RNNs vary in their ability to recognize a n b n and a n b n Σ * . We now test empirically whether these predictions carry over to the learnable capacity of unsaturated RNNs. 11 We compare the QRNN and LSTM when coupled with a linear decoder D 1 . We also train a 2-layer QRNN ("QRNN2") and a 1-layer QRNN with a D 2 decoder ("QRNN+"). We train on strings of length 64, and evaluate generalization on longer strings. We also compare to a baseline that always predicts the majority class. 
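The constructions behind Theorems 10 and 13 referenced above share one idea: keep a counter for #a-b(x) and a flag for whether the 2-gram ba has occurred; a string is in a^n b^n exactly when the counter ends at zero and the flag is unset. The sketch below writes that check in plain Python, and also yields gold labels for every prefix, which is how the formal-language experiments are framed (the sampling details appear later). The function names are ours.

```python
def anbn_state(x):
    """Process x over {a, b}; return (counter #a - #b, whether 'ba' was seen)."""
    count, saw_ba, prev = 0, False, None
    for ch in x:
        count += 1 if ch == "a" else -1
        if prev == "b" and ch == "a":
            saw_ba = True
        prev = ch
    return count, saw_ba

def in_anbn(x):
    """x is in a^n b^n iff the counter returns to 0 and 'ba' never occurs
    (any string with no 'ba' has the shape a*b*)."""
    count, saw_ba = anbn_state(x)
    return count == 0 and not saw_ba

def prefix_labels(x):
    """Gold label for every prefix x[:t] -- the target of the prefix-classification
    experiments on formal languages."""
    return [in_anbn(x[:t + 1]) for t in range(len(x))]

print(in_anbn("aabb"))        # True
print(in_anbn("aabba"))       # False
print(prefix_labels("aabb"))  # [False, False, False, True]
```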
The results are shown in Figure Experiment 1 We use the following language, which has similar formal properties to a n b n , but with a more balanced label distribution: In line with (34), the LSTM decides L 5 perfectly for n ≤ 64, and generalizes fairly well to longer strings. As predicted in (35), the QRNN cannot fully learn L 5 even for n = 64. Finally, as predicted in ( We develop a hierarchy of saturated RNN encoders, considering two angles: space complexity and rational recurrence. Based on the hierarchy, we formally distinguish the state expressiveness of the non-rational s-LSTM and its rational counterpart, the s-QRNN. We show further distinctions in state expressiveness based on encoder space complexity. Moreover, the hierarchy translates to differences in language recognition capabilities. Strengthening the decoder alleviates some, but not all, of these differences. We present two languages, both recognizable by an LSTM. We show that one can be recognized by an s-QRNN only with the help of a decoder, and that the other cannot be recognized by an s-QRNN with the help of any decoder. While this means existing rational RNNs are fundamentally limited compared to LSTMs, we find that it is not necessarily being rationally recurrent that limits them: in fact, we prove that a WFA can perfectly encode its input-something no saturated RNN can do. We conclude with an analysis that shows that an RNN architecture's strength must also take into account its space complexity. These results further our understanding of the inner working of NLP systems. We hope they will guide the development of more expressive rational RNNs. We extend the result in Theorem 3 as follows. Theorem 7. Any (Σ × Q)-restricted CM is rationally recurrent. Proof. We present an algorithm to construct a WFA computing an arbitrary counter in a (Σ × Q)restricted CM. First, we create two independent copies of the transition graph for the restricted CM. We refer to one copy of the CM graph as the add graph, and the other as the multiply graph. The initial state in the add graph receives a starting weight of 1, and every other state receives a starting weight of 0. Each state in the add graph receives an accepting weight of 0, and each state in the multiply graph receives an accepting weight of 1. In the add graph, each transition receives a weight of 1. In the multiply graph, each transition receives a weight of 0 if it represents ×0, and 1 otherwise. Finally, for each non-multiplicative update σ/+m Each counter update creates one path ending in the multiply graph. The path score is set to 0 if that counter update is "erased" by a ×0 operation. Thus, the sum of all the path scores in the WFA equals the value of the counter. This construction can be extended to accommodate =m counter updates from q i to q j by adding an additional transition from the initial state to q j in the multiplication graph with weight m. This allows us to apply it directly to s-QRNNs, whose update operations include =1 and =-1. We show that while WFAs cannot directly encode an indicator for the language a n b n = {a n b n | | n ∈ N}, they can encode a function that can be thresholded to recognize a n b n , i.e.: We prove this by showing a function whose Hankel matrix has finite rank that, when combined with the identity transformation (i.e., w = 1, b = 0) followed by thresholding, is an indicator for a n b n . Using the shorthand σ(x) = # σ (x), the function is: a n b n . 
To prove that its Hankel matrix, H f , has finite rank, we will create 3 infinite matrices of ranks 3, 3 and 1, which sum to H f . The majority of the proof will focus on the rank of the rank 3 matrices, which have similar compositions. We now show 3 series r, s, t and a set of series they can be combined to create. These series will be used to create the base vectors for the rank 3 matrices. where for every j ≤ 2, Lemma 3. Let c i = 1 -2i 2 and {c (k) } k∈N be the set of series defined c Proof. For i ∈ {0, 1, 2}, r i , s i and t i collapse to a 'select' operation, giving the true statement c Substituting the series definitions in the right side of the equation gives which can be expanded to Proof. An ifo s-QRNN can be expressed as a Σ krestricted CM with the additional update operations {:= -1, := 1}, where k is the window size of the QRNN. So it is sufficient to show that such a machine, when coupled with the decoder D 1 (linear translation followed by thresholding), cannot recognize a n b n . Let A be some such CM, with window size k and h counters. Take n = k + 10 and for every m ∈ N denote w m = a n b m and the counter values of A after w m as c m ∈ Q h . Denote by u t the vector of counter update operations made by this machine on input sequence w m at time t ≤ n + m. As A is dependent only on the last k counters, necessarily all u k+i are identical for every i ≥ 1. It follows that for all counters in the machine that go through an assignment (i.e., :=) operation in u k+1 , their values in c k+i are identical for every i ≥ 1, and for every other counter j, c k+i j -c k j = i • δ for some δ ∈ Z. Formally: for every i ≥ 1 there are two sets We now consider the linear thresholder, defined by weights and bias w, b. In order to recognise a n b n , the thresholder must satisfy: Opening these equations gives: However, this does not mean that the s-QRNN is entirely incapable of recognising a n b n . Increasing the decoder power allows it to recognise a n b n quite simply: Theorem 10. For the two-layer decoder D 2 , a n b n ∈ D 2 (s-QRNN). Proof. Let # ba (x) denote the number of ba 2grams in x. We use s-QRNN with window size 2 to maintain two counters: [c t ] 2 can be computed provided the QRNN window size is ≥ 2. A two-layer decoder can then check Theorem 11 (Suffix attack). No s-QRNN and decoder can recognize the language a n b n Σ * = a n b n (a|b) * , n > 0, i.e., a n b n Σ * / ∈ L(s-QRNN) for any decoder L. The proof will rely on the s-QRNN's inability to "freeze" a computed value, protecting it from manipulation by future input. Proof. As in the proof for Theorem 9, it is sufficient to show that no Σ k -restricted CM with the additional operations {:=-1, :=1} can recognize a n b n Σ * for any decoder L. Let A be some such CM, with window size k and h counters. For every w ∈ Σ n denote by c(w) ∈ Q h the counter values of A after processing w. Denote by u t the vector of counter update operations made by this machine on an input sequence w at time t ≤ |w|. Recall that A is Σ k restricted, meaning that u i depends exactly on the window of the last k tokens for every i. We now denote j = k + 10 and consider the sequences w 1 = a j b j a j b j a j b j , w 2 = a j b j-1 a j b j+1 a j b j . w 2 is obtained from w 1 by removing the 2j-th token of w 1 and reinserting it at position 4j. As all of w 1 is composed of blocks of ≥ k identical tokens, the windows preceding all of the other tokens in w 1 are unaffected by the removal of the 2j-th token. 
Similarly, being added onto the end of a substring b k , its insertion does not affect the windows of the tokens after it, nor is its own window different from before. This means that overall, the set of all operations u i performed on the counters is identical in w 1 and in w 2 . The only difference is in their ordering. w 1 and w 2 begin with a shared prefix a k , and so necessarily the counters are identical after processing it. We now consider the updates to the counters after these first k tokens, these are determined by the windows of k tokens preceding each update. First, consider all the counters that undergo some assignment (:=) operation during these sequences, and denote by {w} the multiset of windows in w ∈ Σ k for which they are reset. w 1 and w 2 only contain k-windows of types a x b k-x or b x a k-x , and so these must all re-appear in the shared suffix b j a j b j of w 1 and w 2 , at which point they will be synchronised. It follows that these counters all finish with identical value in c(w 1 ) and c(w 2 ). All the other counters are only updated using addition of -1, 1 and 0, and so the order of the updates is inconsequential. It follows that they too are identical in c(w 1 ) and c(w 2 ), and therefore necessarily that c(w 1 ) = c(w 2 ). From this we have w 1 , w 2 satisfying w 1 ∈ a n b n Σ * , w 2 / ∈ a n b n Σ * but also c(w 1 ) = c(w 2 ). Therefore, it is not possible to distinguish between w 1 and w 2 with the help of any decoder, despite the fact that w 1 ∈ a n b n Σ * and w 2 / ∈ a n b n Σ * . It follows that the CM and s-QRNN cannot recognize a n b n Σ * with any decoder. For the opposite extension Σ * a n b n , in which the language is augmented by a prefix, we cannot use such a "suffix attack". In fact, Σ * a n b n can be recognized by an s-QRNN with window length w ≥ 2 and a linear threshold decoder as follows: a counter counts # a-b (x) and is reset to 1 on appearances of ba, and the decoder compares it to 0. Note that we define decoders as functions from the final state to the output. Thus, adding an additional QRNN layer does not count as a "decoder" (as it reads multiple states). In fact, we show that having two QRNN layers allows recognizing a n b n Σ * . Theorem 12. Let be the empty string. Then, Proof. We construct a two-layer s-QRNN from which a n b n Σ * can be recognized. Let $ denote the left edge of the string. The first layer computes two quantities d t and e t as follows: Note that e t can be interpreted as a binary value checking whether the first token was b. The second layer computes c t as a function of d t , e t , and x t (which can be passed through the first layer). We will demonstrate a construction for c t by creating linearly separable functions for the gate terms f t and z t that update c t . Now, the update function u t to c t can be expressed -1 otherwise. (71) Finally, the decoder accepts iff c t ≤ 0. To justify this, we consider two cases: either x starts with b or a. If x starts with b, then e t = 0, so we increment c t by 1 and never decrement it. Since 0 < c t for any t, we will reject x. If x starts with a, then we accept iff there exists a sequence of bs following the prefix of as such that both sequences have the same length. In contrast to the s-QRNN, we show that the s-LSTM paired with a simple linear and thresholding decoder can recognize both a n b n and a n b n Σ * . Theorem 13. Proof. 
Assuming a string a i b i , we set two units of the LSTM state to compute the following functions using the CM in Figure We also add a third unit [c t ] 3 that tracks whether the 2-gram ba has been encountered, which is equivalent to verifying that the string has the form a i b i . Allowing h t = tanh(c t ), we set the linear threshold layer to check Theorem 14. Proof. We use the same construction as Theorem 13, augmenting it with We decide x according to the (still linearly separable) equation Models were trained on strings up to length 64, and, at each index t, were asked to classify whether or not the prefix up to t was a valid string in the language. Models were then tested on independent datasets of lengths 64, 128, 256, 512, 1024, and 2048. The training dataset contained 100000 strings, and the validation and test datasets contained 10000. We discuss task-specific schemes for sampling strings in the next paragraph. All models were trained for a maximum of 100 epochs, with early stopping after 10 epochs based on the validation cross entropy loss. We used default hyperparameters provided by the open-source AllenNLP framework Sampling strings For the language L 5 , each token was sampled uniformly at random from Σ = {a, b}. For a n b n Σ * , half the strings were sampled in this way, and for the other half, we sampled n uniformly between 0 and 32, fixing the first 2n characters of the string to a n b n and sampling the suffix uniformly at random. Experiments were run for 20 GPU hours on Quadro RTX 8000. Architecture We place saturated self attention 1. At time t, compute queries q t , keys k t , and values v t from the input embedding x t using a linear transformation. 2. Compute attention head h t by attending over the keys and values up to time t (K :t and V :t ) with query q t . 3. Let • L denote a layer normalization operation This simplified architecture has only one attention head, and does not incorporate residual connections. It is also masked (i.e., at time t, can only see the prefix X :t ), which enables direct comparison with unidirectional RNNs. For simplicity, we do not add positional information to the input embeddings. Theorem 15. Saturated masked self attention is not RR. Proof. Let # σ (x) denote the number of occurences of σ ∈ Σ in string x. We construct a self attention layer to compute the following function over {a, b} * : Since the Hankel sub-block over P = a * , S = b * has infinite rank, f ∈ R. Fix v t = x t . As shown by For all t, set the key and query k t , q t = 1. Thus, all the key-query similarities are 1, and we obtain: Applying layer norm to this quantity preserves equality of the first and second elements. Thus, we set the layer in (77) to independently check 0 < [h 0 t ] 1 -[h 0 t ] 2 and [h 0 t ] 1 -[h 0 t ] 2 < 0 using ReLU. The final layer c t sums these two quantities, returning 0 if neither condition is met, and 1 otherwise. Since saturated self attention can represent f / ∈ R, it is not RR. Space Complexity We show that self attention falls into the same space complexity class as the LSTM and QRNN. Our method here extends Merrill (2019)'s analysis of attention. Theorem 16. Saturated single-layer self attention has Θ(log n) space. Proof. The construction from Theorem 15 can reach a linear (in sequence length) number of different outputs, implying a linear number of different configurations, and so that the space complexity of saturated self attention is Ω(log n). We now show the upper bound O(log n). 
A sufficient representation for the internal state (configuration) of a self-attention layer is the unordered group of key-value pairs over the prefixes of the input sequence. Since f k : x t → k t and f v : x t → v t have finite domain (Σ), their images K = image(f k ), V = image(f v ) are finite. Note that this construction does not apply if the "vocabulary" we are attending over is not finite. Thus, using unbounded positional embeddings, stacking multiple self attention layers, or applying attention over other encodings with unbounded state might reach Θ(n). While it eludes our current focus, we hope future work will extend the saturated analysis to self attention more completely. We direct the reader to All of the standard RNN architectures considered in Section 3 have O(log n) space in their saturated form. In this section, we consider a stack RNN encoder similar to the one proposed by Classically, a stack is a dynamic list of objects to which elements v ∈ V can be added and removed in a LIFO manner (using push and pop operations). The stack RNN proposed in Differentiable Stack In a differentiable stack, the update operation takes an element s t to push and a distribution π t over the update operations push, pop, and no-op, and returns the weighted average of the result of applying each to the current stack. The averaging is done elementwise along the stacks, beginning from the top entry. To facilitate this, differentiable stacks are padded with infinite 'null entries'. Their elements must also have a weighted average operation defined. Definition 6 (Geometric k-stack RNN encoder). Initialize the stack S to an infinite list of null entries, and denote by S t the stack value at time t. Using 1-indexing for the stack and denoting [S t-1 ] 0 s t , the geometric k-stack RNN recurrent update is: In this work we will consider the case where the null entries are 0 and the encoding c t is produced as a geometric-weighted sum of the stack contents, This encoding gives preference to the latest values in the stack, giving initial stack encoding c 0 = 0. | 667 | 683 | 667 |
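The recurrent update of the geometric k-stack RNN (Definition 6) did not survive extraction, so the following is a hedged reconstruction from the surrounding description: the new stack is the π_t-weighted element-wise average of the stacks produced by push, no-op and pop (with [S_{t-1}]_0 taken to be the pushed element s_t and zeros as the null entries), and the encoding c_t is a geometric-weighted sum of the stack contents that favours the most recent entries, in line with the converging binary encoding of Lemma 5. The decay base and padding depth are assumptions.

```python
import numpy as np

def stack_update(stack, s_t, pi):
    """One differentiable-stack step. stack: (depth,) array padded with zeros
    ('null entries'); s_t: scalar element to push; pi: (push, noop, pop) weights.
    New entry i is the pi-weighted average of what push / no-op / pop would put there."""
    shifted_down = np.concatenate(([s_t], stack[:-1]))   # result of a pure push
    shifted_up = np.concatenate((stack[1:], [0.0]))      # result of a pure pop
    return pi[0] * shifted_down + pi[1] * stack + pi[2] * shifted_up

def stack_encoding(stack, decay=0.5):
    """Geometric-weighted sum of the stack contents, preferring the topmost
    (most recent) values; an all-zero stack gives the initial encoding c_0 = 0."""
    weights = decay ** np.arange(len(stack))
    return float(weights @ stack)

stack = np.zeros(8)
for s_t, pi in [(1.0, (1, 0, 0)), (1.0, (1, 0, 0)), (0.0, (0, 0, 1))]:
    stack = stack_update(stack, s_t, np.array(pi, dtype=float))
print(stack[:3], stack_encoding(stack))   # two hard pushes of 1 followed by a pop
```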
A Simple and Effective Approach to Coverage-Aware Neural Machine Translation | We offer a simple and effective method to seek a better balance between model confidence and length preference for Neural Machine Translation (NMT). Unlike the popular length normalization and coverage models, our model does not require training nor reranking the limited n-best outputs. Moreover, it is robust to large beam sizes, which is not well studied in previous work. On the Chinese-English and English-German translation tasks, our approach yields +0.4 ∼ 1.5 BLEU improvements over the state-of-the-art baselines. | In the past few years, Neural Machine Translation (NMT) has achieved state-of-the-art performance in many translation tasks. It models the translation problem using neural networks with no assumption of the hidden structures between two languages, and learns the model parameters from bilingual texts in an end-to-end fashion where x and y are the source and target sentences, and P(y j |y <j , x) is the probability of generating the j-th word y j given the previously-generated words y <j and the source sentence x. However, the straightforward implementation of this model suffers from many problems, the most obvious one being the bias that the system tends to choose shorter translations because the log-probability is added over time steps. The situation is worse when we use beam search where the shorter translations have more chances to beat the longer ones. It is in general to normalize the model score by translation length (say length normalization) to eliminate this system bias Though widely used, length normalization is not a perfect solution. NMT systems still have under-translation and over-translation problem even with a normalized model. It is due to the lack of the coverage model that indicates the degree a source word is translated. As an extreme case, a source word might be translated for several times, which results in many duplicated target words. Several research groups have proposed solutions to this bad case In this paper we present a simple and effective approach by introducing a coverage-based feature into NMT. Unlike previous studies, we do not resort to developing extra models nor reranking the limited n-best translations. Instead, we develop a coverage score and apply it to each decoding step. Our approach has several benefits, • Our approach does not require to train a huge neural network and is easy to implement. Figure We test our approach on the NIST Chinese-English and WMT English-German translation tasks, and it outperforms several state-of-the-art baselines by 0.4∼1.5 BLEU points. | Given a word sequence, a coverage vector indicates whether the word of each position is translated. This is trivial for statistical machine translation However, it is not the case for NMT where the coverage is modeled in a soft way. In NMT, no explicit translation units or rules are used. The attention mechanism is used instead to model the correspondence between a source position and a target position Here, we present a coverage score (CS) to describe to what extent the source words are translated. In principle, the coverage score should be high if the translation covers most words in source sentence, and low if it covers only a few of them. Given a source position i, we define its coverage as the sum of the past attention probabilities c i = |y| j a ij where β is a parameter that can be tuned on a development set. This model has two properties: • Non-linearity Eq. 
( • Truncation At the early stage of decoding, the coverage of the most source words is close to 0. This may result in a negative infinity value after the logarithm function, and discard hypotheses with sharp attention distributions, which is not necessarily bad. The truncation with the lowest value β can ensure that the coverage score has a reasonable value. Here β is similar to model warm-up, which makes the model easy to run in the first few decoding steps. Note that our way of truncation is different from For decoding, we incorporate the coverage score into beam search via linear combination with the NMT model score as below, where y is a partial translation generated during decoding, log P(y|x) is the model score, and α is the coefficient for linear interpolation. In standard implementation of NMT systems, once a hypothesis is finished, it is removed from the beam and the beam shrinks accordingly. Here we choose a different decoding strategy. We keep the finished hypotheses in the beam until the decoding completes, which means that we compare the finished hypotheses with partial translations at each step. This method helps because it can dynamically determine whether a finished hypothesis is kept in beam through the entire decoding process, and thus reduce search errors. It enables the decoder to throw away finished hypotheses if they have very low coverage but are of high likelihood values. We evaluated our approach on Chinese-English and German-English translation tasks. We used 1.8M sentence Chinese-English bitext provided within NIST12 OpenMT 2 and 4.5M sentence German-English bitext provided within WMT16. For Chinese-English translation, we chose the evaluation data of NIST MT06 as the development set, and MT08 as the test set. All Chinese sentences were word segmented using the tool provided within NiuTrans Our baseline systems were based on the opensource implementation of the NMT model presented in For comparison, we re-implemented the length normalization (LN) and coverage penalty (CP) methods Table We also compared CP with our method by ap- Table Then, Figure Another interesting question is whether the N-MT systems can generate translations with appropriate lengths. To seek its answer, we studied the length difference between the MT output and the shortest reference. Table Sensitivity analysis on α and β in Table The length preference and coverage problems have been discussed for years since the rise of statistical machine translation Perhaps the most related work to this paper is To address this issue, we remove the probability constraint and make the coverage score interpretable for different cases. Another difference lies in that our coverage model is applied to every beam search step, while Previous work have pointed out that BLEU scores of NMT systems drop as beam size increases We have described a coverage score and integrated it into a state-of-the-art NMT system. Our method is easy to implement and does not need training for additional models. Also, it performs well in searching with large beam sizes. On Chinese-English and English-German translation tasks, it outperforms several baselines significantly. | 522 | 2,040 | 522 |
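The coverage-score equation and its interpolation with the model score are partly garbled above; a plausible reconstruction consistent with the prose is CS(y) = Σ_i log max(c_i, β), with c_i the attention mass accumulated on source position i, combined during beam search as log P(y|x) + α · CS. The functional form, and the α and β values below, are assumptions for illustration rather than the paper's exact equations.

```python
import math

def coverage_score(attention_rows, beta=0.3):
    """attention_rows: one attention distribution over source positions per
    generated target word. c_i = sum_j a_ij; each coverage is truncated from
    below at beta so early, near-zero coverages do not drive the score to -inf."""
    if not attention_rows:
        return 0.0
    src_len = len(attention_rows[0])
    coverage = [sum(row[i] for row in attention_rows) for i in range(src_len)]
    return sum(math.log(max(c, beta)) for c in coverage)

def rescored(log_prob, attention_rows, alpha=0.4, beta=0.3):
    """Combined beam-search score: model log-probability plus the weighted
    coverage score (alpha and beta would be tuned on a development set)."""
    return log_prob + alpha * coverage_score(attention_rows, beta)

# Two partial hypotheses with equal model score: the one that spreads attention
# over all source words receives the higher combined score.
spread = [[0.6, 0.2, 0.2], [0.2, 0.6, 0.2], [0.2, 0.2, 0.6]]
repeat = [[0.9, 0.05, 0.05], [0.9, 0.05, 0.05], [0.9, 0.05, 0.05]]
print(rescored(-4.0, spread), rescored(-4.0, repeat))
```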
Deep-speare: A joint neural model of poetic language, meter and rhyme | In this paper, we propose a joint architecture that captures language, rhyme and meter for sonnet modelling. We assess the quality of generated poems using crowd and expert judgements. The stress and rhyme models perform very well, as generated poems are largely indistinguishable from human-written poems. Expert evaluation, however, reveals that a vanilla language model captures meter implicitly, and that machine-generated poems still underperform in terms of readability and emotion. Our research shows the importance expert evaluation for poetry generation, and that future research should look beyond rhyme/meter and focus on poetic language. | With the recent surge of interest in deep learning, one question that is being asked across a number of fronts is: can deep learning techniques be harnessed for creative purposes? Creative applications where such research exists include the composition of music (Humphrey et al., 2013; Sturm et al., 2016; Choi et al., 2016), the design of sculptures (Lehman et al., 2016), and automatic choreography (Crnkovic-Friis and Crnkovic-Friis, 2016). In this paper, we focus on a creative textual task: automatic poetry composition. A distinguishing feature of poetry is its aesthetic forms, e.g. rhyme and rhythm/meter. Shall I compare thee to a summer's day? Thou art more lovely and more temperate: Rough winds do shake the darling buds of May, And summer's lease hath all too short a date: of stresses. Specifically, we focus on sonnets and generate quatrains in iambic pentameter (e.g. see Figure Our findings are as follows: • our proposed stress and rhyme models work very well, generating sonnet quatrains with stress and rhyme patterns that are indistinguishable from human-written poems and rated highly by an expert; • a vanilla language model trained over our sonnet corpus, surprisingly, captures meter implicitly at human-level performance; • while crowd workers rate the poems generated by our best model as nearly indistinguishable from published poems by humans, an expert annotator found the machine-generated poems to lack readability and emotion, and our best model to be only comparable to a vanilla language model on these dimensions; • most work on poetry generation focuses on meter (Greene et al., 2010; Ghazvininejad et al., 2016; Hopkins and Kiela, 2017); our results suggest that future research should look beyond meter and focus on improving readability. In this, we develop a new annotation framework for the evaluation of machine-generated poems, and release both a novel data of sonnets and the full source code associated with this research. | Early poetry generation systems were generally rule-based, and based on rhyming/TTS dictionaries and syllable counting (Gervás, 2000; Wu et al., 2009; Netzer et al., 2009; Colton et al., 2012; Toivanen et al., 2013). The earliest attempt at using statistical modelling for poetry generation was Greene et al. (2010), based on a language model paired with a stress model. Neural networks have dominated recent research. Zhang and Lapata (2014) use a combination of convolutional and recurrent networks for modelling Chinese poetry, which Wang et al. (2016) later simplified by incorporating an attention mechanism and training at the character level. For English poetry, Ghazvininejad et al. (2016) introduced a finite-state acceptor to explicitly model rhythm in conjunction with a recurrent neural language model for generation. 
Hopkins and Kiela (2017) improve rhythm modelling with a cascade of weighted state transducers, and demonstrate the use of character-level language model for English poetry. A critical difference over our work is that we jointly model both poetry content and forms, and unlike previous work which use dictionaries (Ghazvininejad et al., 2016) or heuristics (Greene et al., 2010) for rhyme, we learn it automatically. The sonnet is a poem type popularised by Shakespeare, made up of 14 lines structured as 3 quatrains (4 lines) and a couplet (2 lines); A sonnet line obeys an alternating stress pattern, called the iambic pentameter, e.g.: S -S + S -S + S -S + S -S + S -S + Shall I compare thee to a summer's day? where S -and S + denote unstressed and stressed syllables, respectively. A sonnet also rhymes, with a typical rhyming scheme being ABAB CDCD EFEF GG. There are a number of variants, however, mostly seen in the quatrains; e.g. AABB or ABBA are also common. We build our sonnet dataset from the latest image of Project Gutenberg. Given the poems, we use word and character statistics derived from Shakespeare's 154 sonnets to filter out all non-sonnet poems (to form the "BACKGROUND" dataset), leaving the sonnet corpus ("SONNET"). We propose modelling both content and forms jointly with a neural architecture, composed of 3 components: (1) a language model; (2) a pentameter model for capturing iambic pentameter; and (3) a rhyme model for learning rhyming words. Given a sonnet line, the language model uses standard categorical cross-entropy to predict the next word, and the pentameter model is similarly trained to learn the alternating iambic stress patterns. We use standard perplexity for evaluating the language model. In terms of model variants, we have: Each number is an average across 10 runs. • LM * * : LSTM language model that incorporates both character encodings and preceding context; • LM * * -C: Similar to LM * * , but preceding context is encoded using convolutional networks, inspired by the poetry model of Zhang and Lapata (2014); 20 • LM * * +PM+RM: the full model, with joint training of the language, pentameter and rhyme models. Perplexity on the test partition is detailed in Table 2. Encouragingly, we see that the incorporation of character encodings and preceding context improves performance substantially, reducing perplexity by almost 10 points from LM to LM * * . The inferior performance of LM * * -C compared to LM * * demonstrates that our approach of processing context with recurrent networks with selective encoding is more effective than convolutional networks. The full model LM * * +PM+RM, which learns stress To assess the pentameter model, we use the attention weights to predict stress patterns for words in the test data, and compare them against stress patterns in the CMU pronunciation dictionary. To extract a stress pattern for a word from the model, we iterate through the pentameter (10 time steps), and append the appropriate stress (e.g. 1st time step = S -) to the word if any of its characters receives an attention 0.20. For the baseline (Stress-BL) we use the pretrained weighted finite state transducer (WFST) provided by Hopkins and Kiela (2017). We present stress accuracy in Table x-axis the characters of the sonnet line (punctuation removed). The attention network appears to perform very well, without any noticeable errors. 
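The stress-extraction procedure just described -- walk the ten pentameter time steps and append S- or S+ to a word whenever any of its characters receives attention of at least 0.20 (the comparison operator is lost in extraction and "at least" is assumed) -- can be sketched as follows. The attention-matrix format and the toy example are assumptions made for illustration.

```python
import numpy as np

def extract_stress_patterns(attention, char_to_word, n_words, threshold=0.20):
    """attention: 10 rows (one per pentameter time step), each a list of attention
    weights over the characters of the line; char_to_word: word index of each
    character. Odd-numbered steps (1st, 3rd, ...) are unstressed S-, even ones
    stressed S+, following the iambic template."""
    patterns = [[] for _ in range(n_words)]
    for step, row in enumerate(attention):
        stress = "S-" if step % 2 == 0 else "S+"
        hit_words = {char_to_word[i] for i, w in enumerate(row) if w >= threshold}
        for widx in hit_words:
            patterns[widx].append(stress)
    return patterns

# Toy line of two words ("shall" + "i", six characters) with a made-up
# 10-step attention matrix, just to show the call shape.
rng = np.random.default_rng(0)
attn = rng.dirichlet(np.ones(6), size=10).tolist()
print(extract_stress_patterns(attn, char_to_word=[0] * 5 + [1], n_words=2))
```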
The only minor exception is lovely in the second line, where it predicts 2 stresses but the second stress focuses incorrectly on the character e rather than y. Additional heatmaps for the full sonnet are provided in the supplementary material. We follow a similar approach to evaluate the rhyme model against the CMU dictionary, but score based on F1 score. Word pairs that are not included in the dictionary are discarded. Rhyme is determined by extracting the final stressed phoneme for the paired words, and testing if their phoneme patterns match. We predict rhyme for a word pair by feeding them to the rhyme model and computing cosine similarity; if a word pair is assigned a score 0.8, We focus on quatrain generation in this work, and so the aim is to generate 4 lines of poetry. During generation we feed the hidden state from the previous time step to the language model's decoder to compute the vocabulary distribution for the current time step. Words are sampled using a temperature between 0.6 and 0.8, and they are resampled if the following set of words is generated: (1) UNK token; (2) non-stopwords that were generated before; We next describe how to incorporate the pentameter model for generation. Given a sonnet line, the pentameter model computes a loss L pm (Equation (3)) that indicates how well the line conforms to the iambic pentameter. We first generate 10 candidate lines (all initialised with the same hidden state), and then sample one line from the candidate lines based on the pentameter loss values (L pm ). We convert the losses into probabilities by taking the softmax, and a sentence is sampled with temperature = 0.1. To enforce rhyme, we randomly select one of the rhyming schemes (AABB, ABAB or ABBA) and resample sentence-ending words as necessary. Given a pair of words, the rhyme model produces a cosine similarity score that estimates how well the two words rhyme. We resample the second word of a rhyming pair (e.g. when generating the second A in AABB) until it produces a cosine similarity 0.9. We also resample the second word of a nonrhyming pair (e.g. when generating the first B in AABB) by requiring a cosine similarity 0.7. We assess our sonnet model in two ways: (1) component evaluation of the language, pentameter and rhyme models; and (2) poetry generation evaluation, by crowd workers and an English literature expert. A sample of machine-generated sonnets are included in the supplementary material. We tune the hyper-parameters of the model over the development data (optimal configuration in the supplementary material). Word embeddings are initialised with pre-trained skip-gram embeddings (Mikolov et al., 2013a,b) on the BACKGROUND dataset, and are updated during training. For optimisers, we use Adagrad (Duchi et al., 2011) for the language model, and Adam (Kingma and Ba, 2014) for the pentameter and rhyme models. We truncate backpropagation through time after 2 sonnet lines, and train using 30 epochs, resetting the network weights to the weights from the previous epoch whenever development loss worsens. Following Hopkins and Kiela (2017), we present a pair of quatrains (one machine-generated and one human-written, in random order) to crowd workers on CrowdFlower, and ask them to guess which is the human-written poem. Generation quality is estimated by computing the accuracy of workers at correctly identifying the human-written poem (with lower values indicate better results for the model). 
We generate 50 quatrains each for LM, LM * * and LM * * +PM+RM (150 in total), and as a control, generate 30 quatrains with LM trained for one epoch. An equal number of human-written quatrains was sampled from the training partition. A HIT contained 5 pairs of poems (of which one is a control), and workers were paid $0.05 for each HIT. Workers who failed to identify the human-written poem in the control pair reliably (minimum accuracy = 70%) were removed by CrowdFlower automati- cally, and they were restricted to do a maximum of 3 HITs. To dissuade workers from using search engines to identify real poems, we presented the quatrains as images. Accuracy is presented in Table To better understand the qualitative aspects of our generated quatrains, we asked an English literature expert (a Professor of English literature at a major English-speaking university; the last author of this paper) to directly rate 4 aspects: meter, rhyme, readability and emotion (i.e. amount of emotion the poem evokes). All are rated on an ordinal scale between 1 to 5 (1 = worst; 5 = best). In total, 120 quatrains were annotated, 30 each for LM, LM * * , LM * * +PM+RM, and human-written poems (Human). The expert was blind to the source of each poem. The mean and standard deviation of the ratings are presented in Table We found that our full model has the highest ratings for both rhyme and meter, even higher than human poets. This might seem surprising, but in fact it is well established that real poets regularly break rules of form to create other effects (Adams, 1997). Despite excellent form, the output of our model can easily be distinguished from humanwritten poetry due to its lower emotional impact and readability. In particular, there is evidence here that our focus on form actually hurts the readability of the resulting poems, relative even to the simpler language models. Another surprise is how well simple language models do in terms of their grasp of meter: in this expert evaluation, we see only marginal benefit as we increase the sophistication of the model. Taken as a whole, this evaluation suggests that future research should look beyond forms, towards the substance of good poetry. We propose a joint model of language, meter and rhyme that captures language and form for modelling sonnets. We provide quantitative analyses for each component, and assess the quality of generated poems using judgements from crowdworkers and a literature expert. Our research reveals that vanilla LSTM language model captures meter implicitly, and our proposed rhyme model performs exceptionally well. Machine-generated generated poems, however, still underperform in terms of readability and emotion. | 649 | 1,971 | 649 |
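A sketch of the rhyme-enforcement step of the generation procedure described above: choose one of the AABB/ABAB/ABBA schemes at random, then resample each line-ending word until the rhyme model's cosine similarity with its partner reaches 0.9 for rhyming pairs, and resample the first word of a new rhyme letter until its similarity with the previous line's word drops to at most 0.7 (the direction of that second comparison is an assumption, since the operator was dropped in extraction). The resample_word and rhyme_sim interfaces are hypothetical stand-ins for the language and rhyme models.

```python
import random

# For each quatrain line, the index of the earlier line it must rhyme with,
# or None if it starts a new rhyme letter.
SCHEMES = {"AABB": [None, 0, None, 2],
           "ABAB": [None, None, 0, 1],
           "ABBA": [None, None, 1, 0]}

def enforce_rhyme(last_words, resample_word, rhyme_sim,
                  rhyme_thresh=0.9, non_rhyme_thresh=0.7, max_tries=50):
    """last_words: initial line-ending words of a quatrain. Resample until the
    rhyme constraints of a randomly chosen scheme are satisfied (or max_tries)."""
    scheme = random.choice(list(SCHEMES))
    partner = SCHEMES[scheme]
    words = list(last_words)
    for i in range(1, 4):
        j = partner[i]
        if j is not None:                       # must rhyme with line j
            for _ in range(max_tries):
                if rhyme_sim(words[i], words[j]) >= rhyme_thresh:
                    break
                words[i] = resample_word(i)
        else:                                   # should not rhyme with line i-1
            for _ in range(max_tries):
                if rhyme_sim(words[i], words[i - 1]) <= non_rhyme_thresh:
                    break
                words[i] = resample_word(i)
    return scheme, words

# Toy usage with stand-ins for the language and rhyme models.
pool = ["day", "may", "night", "light", "tree", "sea"]
fake_sim = lambda a, b: 1.0 if a[-2:] == b[-2:] else 0.0
print(enforce_rhyme(["day", "tree", "night", "sea"],
                    resample_word=lambda i: random.choice(pool),
                    rhyme_sim=fake_sim))
```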
Evolutionary Data Measures: Understanding the Difficulty of Text Classification Tasks | Classification tasks are usually analysed and improved through new model architectures or hyperparameter optimisation but the underlying properties of datasets are discovered on an ad-hoc basis as errors occur. However, understanding the properties of the data is crucial in perfecting models. In this paper we analyse exactly which characteristics of a dataset best determine how difficult that dataset is for the task of text classification. We then propose an intuitive measure of difficulty for text classification datasets which is simple and fast to calculate. We show that this measure generalises to unseen data by comparing it to stateof-the-art datasets and results. This measure can be used to analyse the precise source of errors in a dataset and allows fast estimation of how difficult a dataset is to learn. We searched for this measure by training 12 classical and neural network based models on 78 real-world datasets, then use a genetic algorithm to discover the best measure of difficulty. Our difficulty-calculating code 1 and datasets 2 are publicly available. | If a machine learning (ML) model is trained on a dataset then the same machine learning model on the same dataset but with more granular labels will frequently have lower performance scores than the original model (see results in Such a difficulty measure would be useful as an analysis tool and as a performance estimator. As an analysis tool, it would highlight precisely what is causing difficulty in a dataset, reducing the time practitioners need spend analysing their data. As a performance estimator, when practitioners approach new datasets they would be able to use this measure to predict how well models are likely to perform on the dataset. The complexity of datasets for ML has been previously examined | One source of difficulty in a dataset is mislabelled items of data (noise). Class Diversity. Class diversity provides information about the composition of a dataset by measuring the relative abundances of different classes Class Balance. Unbalanced classes are a known problem in machine learning Data Complexity. Humans find some pieces of text more difficult to comprehend than others. How difficult a piece of text is to read can be calculated automatically using measures such as those proposed by Mc Laughlin (1969); We used 78 text classification datasets and trained 12 different ML algorithms on each of the datasets for a total of 936 models trained. The highest achieved macro F1 score We wanted the discovered difficulty measure to be useful as an analysis tool, so we enforced a restriction that the difficulty measure should be composed only by summation, without weighting the constituent statistics. This meant that each difficulty measure could be used as an analysis tool by examining its components and comparing them to the mean across all datasets. Each difficulty measure was represented as a binary vector of length 48 -one bit for each statistic -each bit being 1 if that statistic was used in the difficulty measure. We therefore had 2 48 possible different difficulty measures that may have correlated with model score and needed to search this space efficiently. 
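Concretely, each of these 2^48 candidate measures is just a binary mask over the 48 pre-computed statistics, combined by unweighted summation, and its quality is judged by how strongly the resulting difficulty scores correlate with model performance. Evaluating a single candidate is therefore cheap, as the minimal sketch below shows; the array names and toy values are illustrative, not taken from the released code.

```python
import numpy as np

def difficulty(stats, mask):
    """Unweighted sum of the statistics selected by a binary mask.

    stats: (n_datasets, 48) matrix of pre-computed dataset statistics.
    mask:  (48,) 0/1 vector defining one candidate difficulty measure.
    """
    return stats @ mask                      # plain summation, no weights

def fitness(stats, mask, model_f1):
    """Absolute Pearson correlation between difficulty and model F1.

    Strongly negative raw correlations are desirable (harder datasets
    should score lower); the absolute value gives a non-negative fitness.
    """
    return abs(np.corrcoef(difficulty(stats, mask), model_f1)[0, 1])

# Toy example: 78 datasets, 48 statistics, one candidate using 3 statistics.
rng = np.random.default_rng(0)
stats = rng.random((78, 48))
f1 = rng.random(78)
mask = np.zeros(48, dtype=int)
mask[[3, 11, 40]] = 1
print(difficulty(stats, mask)[:5], fitness(stats, mask, f1))
```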
Genetic algorithms are biologically inspired search algorithms and are good at searching large spaces efficiently We gathered 27 real-world text classification datasets from public sources, summarised in Table We created 51 more datasets by taking two or more of the original 27 datasets and combining all of the data points from each into one dataset. The label for each data item was the name of the dataset which the text originally came from. We combined similar datasets in this way, for example two different datasets of tweets, so that the classes would not be trivially distinguishable -there is no dataset to classify text as either a tweet or Shakespeare for example as this would be too easy for models. The full list of combined datasets is in Appendix A.2. Our datasets focus on short text classification by limiting each data item to 100 words. We demonstrate that the difficulty measure we discover with this setup generalises to longer text classification in Section 3.1. All datasets were lowercase with no punctuation. For datasets with no validation set, 15% of the training set was randomly sampled as a validation set at runtime. We calculated 12 distinct statistics with different n-gram sizes to produce 48 statistics of each dataset. These statistics are designed to increase in value as difficulty increases. The 12 statistics are described here and a listing of the full 48 is in Appendix B in Table We recorded the Shannon Diversity Index and its normalised variant the Shannon Equitability (Shannon, 2001) using the count-based probability distribution of classes described above. We propose a simple measure of class imbalance: C is the total number of classes, n c is the count of items in class c and T DAT A is the total number of data points. This statistic is 0 if there are an equal number of data points in every class and the upper bound is 2 1 -1 C and is achieved when one class has all the data points -a proof is given in Appendix B.2. Per-class probability distributions were calculated by splitting the dataset into subsets based on the class of each data point and then computing countbased probability distributions as described above for each subset. Hellinger Similarity One minus both the average and minimum Hellinger Distance Top N-Gram Interference Average Jaccard similarity Mutual Information Average mutual information Distinct n-grams : Total n-grams Count of distinct n-grams in a dataset divided by the total number of n-grams. Score of 1 indicates that each ngram occurs once in the dataset. The Flesch Reading Ease (FRE) formula grades text from 100 to 0, 100 indicating most readable and 0 indicating difficult to read N-Gram and Character Diversity Using the Shannon Index and Equitability described by Shannon (2001) we calculate the diversity and equitability of n-grams and characters. Probability distributions are count-based as described at the start of this section. To ensure that any discovered measures did not depend on which model was used (i.e. that they were model agnostic), we trained 12 models on every dataset. 
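Several of the statistics listed above are straightforward to compute from raw labels and tokens. The sketch below implements three of them: a class-imbalance measure written here as the summed deviation from a uniform class distribution (one formula consistent with the stated properties of being 0 for balanced classes and approaching 2(1 - 1/C) when a single class holds all the data), the Shannon diversity and equitability of the class distribution, and the distinct-to-total n-gram ratio. It is an illustration rather than the released implementation.

```python
import math
from collections import Counter

def class_imbalance(labels):
    """Deviation of the class distribution from uniform (0 = perfectly balanced)."""
    counts = Counter(labels)
    C, T = len(counts), len(labels)
    return sum(abs(n / T - 1 / C) for n in counts.values())

def shannon_diversity(labels):
    """Shannon index H and its normalised equitability H / ln(C)."""
    counts = Counter(labels)
    T = len(labels)
    H = -sum((n / T) * math.log(n / T) for n in counts.values())
    equit = H / math.log(len(counts)) if len(counts) > 1 else 0.0
    return H, equit

def distinct_ngram_ratio(texts, n=1):
    """Distinct word n-grams divided by total word n-grams (1 = every n-gram unique)."""
    grams = []
    for t in texts:
        toks = t.split()
        grams += [tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)]
    return len(set(grams)) / max(len(grams), 1)

# Toy usage on a tiny labelled corpus.
labels = ["pos", "pos", "neg", "neu"]
texts = ["the film was great", "great fun", "dull and slow", "it was fine"]
print(class_imbalance(labels), shannon_diversity(labels), distinct_ngram_ratio(texts, n=2))
```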
The models are summarised in Table Word Embeddings Our neural network models excluding the Convolutional Neural Network (CNN) used 128-dimensional FastText Term Frequency Inverse Document Frequency (tf-idf) Our classical machine learning models represented each data item as a tf-idf vector Characters Our CNN, inspired by The genetic algorithm maintains a population of candidate difficulty measures, each being a binary vector of length 48 (see start of Method section). At each time step, it will evaluate each member of the population using a fitness function. It will then select pairs of parents based on their fitness, and perform crossover and mutation on each pair to produce a new child difficulty measure, which is added to the next population. This process is iterated until the fitness in the population no longer improves. Population The genetic algorithm is nonrandomly initialised with the 48 statistics described in Section 2.2 -each one is a difficulty measure composed of a single statistic. 400 pairs of parents are sampled with replacement from each population, so populations after this first time step will consist of 200 candidate measures. The probability of a measure being selected as a parent is proportional to its fitness. The fitness function of each difficulty measure is based on the Pearson correlation To produce a new difficulty measure from two parents, the constituent statistics of each parent are randomly intermingled, allowing each parent to pass on information about the search space. This is done in the following way: for each of the 48 statistics, one of the two parents is randomly selected and if the parent uses that statistic, the child also does. This produces a child which has features of both parents. To introduce more stochasticity to the process and ensure that the algorithm does not get trapped in a local minima of fitness, the child is mutated. Mutation is performed by randomly adding or taking away each of the 48 statistics with probability 0.01. After this process, the child difficulty measure is added to the new population. Training The process of calculating fitness, selecting parents and creating child difficulty measures is iterated until there has been no improvement in fitness for 15 generations. Due to the stochasticity in the process, we run the whole evolution 50 times. We run 11 different variants of this evolution, leaving out different statistics of the dataset each time to test which are most important in finding a good difficulty measure, in total running 550 evolutions. Training time is fast, averaging 79 seconds per evolution with a standard deviation of 25 seconds, determined over 50 runs of the algorithm on a single CPU. The four hypothesized areas of difficulty This measure is the shortest measure which achieves a higher correlation than the mean, at -0.8814. This measure is plotted against model F1 scores in Figure A difficulty measure is useful as an analysis and performance estimation tool if it is model agnostic and provides an accurate difficulty estimate on unseen datasets. When running the evolution, the F1 scores of our character-level CNN were not observed by the genetic algorithm. If the discovered difficulty measure still correlated with the CNN's scores despite never having seen them during evolution, it is more likely to be model agnostic. 
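The evolution loop described above can be sketched compactly: fitness-proportional parent selection, per-statistic crossover, and mutation with probability 0.01. The sketch simplifies several details (the paper seeds the population with the 48 single-statistic measures, stops after 15 generations without improvement, and repeats the run 50 times); `fitness` is assumed to be a callable returning a non-negative score for a 48-bit mask, for example the absolute-correlation function sketched earlier with the statistics matrix and model scores closed over.

```python
import numpy as np

def evolve(fitness, n_stats=48, pop_size=200, mut_p=0.01, generations=50, rng=None):
    """Illustrative genetic search over binary difficulty measures."""
    rng = rng or np.random.default_rng()
    # Simplified initialisation: random single-statistic measures.
    pop = np.eye(n_stats, dtype=int)[rng.integers(0, n_stats, size=pop_size)]
    for _ in range(generations):
        fit = np.array([fitness(m) for m in pop])
        probs = fit / fit.sum()                      # selection proportional to fitness
        children = []
        for _ in range(pop_size):
            pa, pb = pop[rng.choice(pop_size, size=2, p=probs)]
            take_b = rng.integers(0, 2, size=n_stats).astype(bool)
            child = np.where(take_b, pb, pa)         # per-statistic crossover
            flips = rng.random(n_stats) < mut_p      # randomly add/remove statistics
            child = np.where(flips, 1 - child, child)
            children.append(child)
        pop = np.array(children)
    fit = np.array([fitness(m) for m in pop])
    return pop[int(fit.argmax())]                    # best measure in the final population
```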
The CNN has a different model architecture to the other models and has a different input type which encodes no prior knowledge (as word embeddings do) or contextual information about the dataset (as tf-idf does). D1 has a correlation of -0.9010 with the CNN and D2 has a correlation of -0.8974 which suggests that both of our presented measures do not depend on what model was used. One of the limitations of our method was that our models never saw text that was longer than 100 words and were never trained on any very large datasets (i.e. >1 million data points). We also performed no hyperparameter optimisation and did not use state-of-the-art models. To test whether our measure generalises to large datasets with text longer than 100 words, we compared it to some recent state-of-the-art results in text classification using the eight datasets described by The Difficulty Measure Generalises to Very Large Datasets and Long Data Items. The smallest of the eight datasets described by The Difficulty Measure is Model and Input Type Agnostic. The state-of-the-art models presented in Table The Difficulty Measure Lacks Precision. The average score achieved on the Yahoo Answers dataset is 69.9% and its difficulty is 4.51. The average score achieved on Yelp Full is 56.8%, 13.1% less than Yahoo Answers and its difficulty is 4.42. In ML terms, a difference of 13% is significant yet our difficulty measure assigns a higher difficulty to the easier dataset. However, Yahoo Answers, Yelp Full and Amazon Full, the only three of Stanford Sentiment Treebank Binary Classification (SST 2) Figure An alternate solution would be to split reviews like this into two separate ones: one with the positive component and one with the negative. Furthermore, Figure To show that our analysis with this difficulty measure was accurately observing the difficulty in SST, we randomly sampled and analysed 100 misclassified data points from SST's test set out of 150 total misclassified. Of these 100, 48 were reviews with both strong positive and negative features and would be difficult for a human to classify, 22 were sarcastic and 8 were mislabelled. The remaining 22 could be easily classified by a human and are misclassified due to errors in the model rather than the data items themselves being difficult to interpret. These findings show that our difficulty measure correctly determined the source of difficulty in SST because 78% of the errors are implied by our difficulty measure and the remaining 22% are due to errors in the model itself, not difficulty in the dataset. We hypothesized that the difficulty of a dataset would be determined by four areas not including noise: Class Diversity, Class Balance, Class Interference and Text Complexity. We performed multiple runs of the genetic algorithm, leaving statistics out each time to test which were most important in finding a good difficulty measure which resulted in the following findings: No Single Characteristic Describes Difficulty When the Class Diversity statistic was left out of evolution, the highest achieved correlation was -0.806, 9% lower than D1 and D2. However, on its own Class Diversity had a correlation of -0.644 with model performance. Clearly, Class Diversity is necessary but not sufficient to estimate dataset difficulty. Furthermore, when all measures of Class Diversity and Balance were excluded, the highest achieved correlation was -0.733 and when all measures of Class Interference were excluded the best correlation was -0.727. 
These three expected areas of difficulty -Class Diversity, Balance and Interference -must all be measured to get an accurate estimate of difficulty because excluding any of them significantly damages the correlation that can be found. Correlations for each individual statistic are in Table Data Complexity Has Little Affect on Difficulty Excluding all measures of Data Complexity from evolution yielded an average correlation of -0.869, only 1% lower than the average when all statistics were included. Furthermore, the only measure of Data Complexity present in D1 and D2 is Distinct Words : Total Words which has a mean value of 0.067 and therefore contributes very little to the difficulty measure. This shows that while Data Complexity is necessary to achieve top correlation, its significance is minimal in comparison to the other areas of difficulty. When a dataset has a large number of balanced classes, then Class Diversity dominates the measure. This means that the difficulty measure is not a useful performance estimator for such datasets. To illustrate this, we created several fake datasets with 1000, 100, 50 and 25 classes. Each dataset had 1000 copies of the same randomly generated string in each class. It was easy for mod-els to overfit and score a 100% F1 score on these fake datasets. For the 1000-class fake data, Class Diversity is 6.91, which by our difficulty measure would indicate that the dataset is extremely difficult. However, all models easily achieve a 100% F1 score. By testing on these fake datasets, we found that the limit for the number of classes before Class Diversity dominates the difficulty measure and renders it inaccurate is approximately 25. Any datasets with more than 25 classes with an approximately equal number of items per class will be predicted as difficult regardless of whether they actually are because of this diversity measure. Datasets with more than 25 unbalanced classes are still measured accurately. For example, the ATIS dataset One of our datasets of New Year's Resolution Tweets has 115 classes but only 3507 data points Our genetic algorithm, based on an unweighted, linear sum, cannot take statistics like data size into account currently because they do not have a convenient range of values; the number of data points in a dataset can vary from several hundred to several million. However, the information is still useful to practitioners in diagnosing the difficulty of a dataset. Given that the difficulty measure lacks precision and may be better suited to classification than regression as discussed in Section 3.1, cannot take account of statistics without a convenient range of values and that the difficulty measure must be interpretable, we suggest that future work could look at combining statistics with a white-box, nonlinear algorithm like a decision tree. As opposed to summation, such a combination could take account of statistics with different value ranges and perform either classification or regression while remaining interpretable. Here we present some general guidelines on how the four areas of difficulty can be reduced. Class Diversity can only be sensibly reduced by lowering the number of classes, for example by grouping classes under superclasses. In academic settings where this is not possible, hierarchical learning allows grouping of classes but will produce granular labels at the lowest level Class Interference is influenced by the amount of noise in the data and linguistic phenomena like sarcasm. 
It can also be affected by the way the data is labelled, for example as shown in Section 3.2 where SST has data points with both positive and negative features but only a single label. Filtering noise, restructuring or relabelling ambiguous data points and detecting phenomena like sarcasm will help to reduce class interference. Easily confused classes can also be grouped under one superclass if practitioners are willing to sacrifice granularity to gain performance. Class Imbalance can be addressed with data augmentation such as thesaurus based methods Data Complexity can be managed with large amounts of data. This need not necessarily be labelled -unsupervised pre-training can help models understand the form of complex data before attempting to use it Model Selection Once the difficulty of a dataset has been calculated, a practitioner can use this to decide whether they need a complex or simple model to learn the data. Performance Checking and Prediction Practitioners will be able to compare the results their models get to the scores of other models on datasets of an equivalent difficulty. If their models achieve lower results than what is expected ac-cording to the difficulty measure, then this could indicate a problem with the model. When their models do not achieve good results, ML practitioners could potentially calculate thousands of statistics to see what aspects of their datasets are stopping their models from learning. Given this, how do practitioners tell which statistics are the most useful to calculate? Which ones will tell them the most? What changes could they make which will produce the biggest increase in model performance? In this work, we have presented two measures of text classification dataset difficulty which can be used as analysis tools and performance estimators. We have shown that these measures generalise to unseen datasets. Our recommended measure can be calculated simply by counting the words and labels of a dataset and is formed by adding five different, unweighted statistics together. As the difficulty measure is an unweighted sum, its components can be examined individually to analyse the sources of difficulty in a dataset. There are two main benefits to this difficulty measure. Firstly, it will reduce the time that practitioners need to spend analysing their data in order to improve model scores. As we have demonstrated which statistics are most indicative of dataset difficulty, practitioners need only calculate these to discover the sources of difficulty in their data. Secondly, the difficulty measure can be used as a performance estimator. When practitioners approach new tasks they need only calculate these simple statistics in order to estimate how well models are likely to perform. Furthermore, this work has shown that for text classification the areas of Class Diversity, Balance and Interference are essential to measure in order to understand difficulty. Data Complexity is also important, but to a lesser extent. Future work should firstly experiment with nonlinear but interpretable methods of combining statistics into a difficulty measure such as decision trees. Furthermore, it should apply this difficulty measure to other NLP tasks that may require deeper linguistic knowledge than text classification, such as named entity recognition and parsing. Such tasks may require more advanced features than simple word counts as were used in this work. | 1,080 | 715 | 1,080 |
Neural Readability Pairwise Ranking for Sentences in Italian Administrative Language | Automatic Readability Assessment aims at assigning a complexity level to a given text, which could help improve the accessibility to information in specific domains, such as the administrative one. In this paper, we investigate the behavior of a Neural Pairwise Ranking Model (NPRM) for sentence-level readability assessment of Italian administrative texts. To deal with data scarcity, we experiment with cross-lingual, cross-and in-domain approaches, and test our models on Admin-It, a new parallel corpus in the Italian administrative language, containing sentences simplified using three different rewriting strategies. We show that NPRMs are effective in zero-shot scenarios (∼0.78 ranking accuracy), especially with ranking pairs containing simplifications produced by overall rewriting at the sentence-level, and that the best results are obtained by adding indomain data (achieving perfect performance for such sentence pairs). Finally, we investigate where NPRMs failed, showing that the characteristics of the data used for fine-tuning, rather than its size, have a bigger effect on a model's performance. | Due to its complexity, the style of Italian administrative texts has been defined as "artificial" and "obscure" One way to tackle this problem is with technologies for Automatic Readability Assessment (ARA) that predict the complexity of texts In this paper, we tackle the data scarcity issue in two ways. First, we introduce Admin-It (Sec. 3), a parallel corpus in the Italian administrative language with sentences that were simplified following three different styles of rewriting. Then, we repurpose We evaluate the performance of NPRMs on Admin-It in zero-shot settings (Sec. 5), fine-tuning models with data from different languages (i.e., Italian, English and Spanish) and domains (i.e., administrative, educational, and news). We show that, overcoming the limitations of traditional ARA system in cross-domain set-ups Finally, we conduct a qualitative analysis on the errors made by NPRMs (Sec. 7), and observe how models deal with various kinds of simplification, such as overall rewriting versus the application of single operations of simplification (e.g., lexical substitution, splitting or deleting). To sum up, our main contributions are: • We create Admin-It, a parallel corpus of sentences for the Italian administrative language containing different simplification styles; 1 • We prove that the Neural Pairwise Ranking Model is also effective for automatic readability assessment of sentences; • We experiment with NPRMs in cross-domain and cross-lingual set-ups, analyzing their performances when fine-tuned with data of different languages and domains, and show that they reach good results in zero-shot scenarios; • We analyze the models' errors according to the styles of simplification applied in different subsections of Admin-It. While ARA is normally a document-level task, we tackle it at the sentence level due to the characteristics of the datasets available in Italian | Early ARA techniques consisted in the so-called "readability formulae". 
Such formulae were created for educational purposes and mainly considered shallow text features, like word and sentence length or lists of common words However, longer words and sentences are not necessarily complex, and these formulae have been proved to be unreliable NLP and Machine Learning fostered the emergence of "AI readability" systems Recently, Given the paucity of data in the Italian administrative language for sentence readability and simplification, we decided to build Admin-It, a parallel corpus of Italian administrative language. The corpus comprises 736 sentence pairs corresponding to two readability levels: original and simplified. We organized the corpus in three subsets according to the different nature of the applied simplification: Operations (Admin-It OP ): 588 pairs of sen-tences from the subsection of the Simpitiki corpus Rewritten Sents (Admin-It RS ): New 100 pairs of original-simplified sentences. The original sentences were selected from websites of Italian municipalities, Rewritten Docs (Admin-It RD ): 48 pairs of sentences selected from administrative texts, which were collected and simplified by In order to make Admin-It publicly available, we masked potentially sensitive data mentioned in the sentences, such as bank account numbers, addresses, licence numbers, phones and emails. Table 1 reports some quantitative information about the corpus. Admin-It RS has the highest average length of all subsets since, by design, it contains simplifications for very long sentences. Furthermore, both Admin-It RS and Admin-It RD register high Levenshtein distances since these two subsets were simplified through overall rewriting, whereas in Admin-It OP , one single simplification operation per sentence was applied. Examples of sentence pairs can be found in Appendix A (Table In this section, we briefly describe the Neural Pairwise Ranking Model (NPRM) of NPRM for Sentences. In our setting, the input text is sentences instead of documents. Even though the NPRM can rank an arbitrary number of texts in each list of tuples, due to the characteristics of our data, we rank sentences in only two readability levels: complex and simple. Therefore, the input is now a list of two tuples with the vector representations of the original (s o ) and simplified (s s ) versions of the same sentence, and their readabilities. That is No further changes were made to the original model. To validate our adaptation of the model, we examined the performance of the NPRM for ranking sentences in a monolingual setting for English. We fine-tuned it on the OSE corpus (see Sec. 5.1) via 5-Fold cross validation with bert-base-uncased. The resulted ranking accuracy was quite high (0.96) and close to the one obtained by We adapted the released code of We fine-tuned our models using data in three languages (English, Spanish and Italian) and three domains (news, administrative and educational). As a pre-processing step, for all datasets, we filtered out instances where the original and simplified sentences were identical. Simpitiki/Wikipedia (Simpitiki W ): Introduced in Tonelli et al. ( SimPA: This is an English sentence-level simplification corpus in the administrative domain Similarly to For what concerns Baseline L , we decided to focus on sentence length to mimic the behaviour of traditional readability formulae, and because it is a raw text feature that we could easily extract and compare between corpora of different languages. 
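A simplified, untrained stand-in for the length-based baseline, together with the pairwise ranking-accuracy metric used in the experiments, is sketched below; the actual Baseline_L is trained on length features over different data combinations, and the Italian sentence pair is a made-up example, not taken from Admin-It.

```python
def rank_by_length(pair):
    """Toy length heuristic: the longer sentence is predicted as the complex one.

    Ties default to labelling the first sentence as complex, so a ranking is
    always produced.
    """
    a, b = pair
    return ("complex", "simple") if len(a.split()) >= len(b.split()) else ("simple", "complex")

def ranking_accuracy(pairs):
    """Share of (original, simplified) pairs whose predicted order is correct."""
    correct = sum(rank_by_length((orig, simp)) == ("complex", "simple") for orig, simp in pairs)
    return correct / len(pairs)

# Hypothetical (original, simplified) administrative sentence pair.
pairs = [
    ("Si comunica che l'ufficio procederà d'ufficio alle conseguenti variazioni.",
     "L'ufficio farà le modifiche necessarie."),
]
print(ranking_accuracy(pairs))
```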
In addition, such baseline assigns a ranking even in cases of ties (see how we handled this in the evalu-ation step in Sec. 5.3). Finally, Baseline L models were trained following different combinations of data, similar to our NPRMs. With regards to Baseline E , the sentence embeddings are obtained from an Italian BERT model that we call BertIta Our models are evaluated in terms of Ranking Accuracy (RA), that is the percentage of pairs ranked correctly. We used the implementation provided by To assess if differences in scores between pairs of models are statistically significant, we used a nonparametric statistical hypothesis test, McNemar's Test We first fine-tuned our models with only Italian data, but not from the administrative domain. Our models were fine-tuned on Simpitiki W , with the NPRM exploiting BertIta. As shown in Table Replacing BertIta with mBERT, We now experiment with adding in-domain data to the previous setting, even if it is in another language. That is, models are now fine-tuned on OSE, NewsEn, NewsEs and SimPA. As shown in Table We proceed to fine-tune our models using out-ofdomain data (i.e., news) in other languages (i.e., English and Spanish). In particular, models are fine-tuned on OSE, NewsEn and NewsEs. Results are reported in Table Despite OSE being smaller than NewsEn and NewsEs, the NPRM fine-tuned on it reached better overall results than when fine-tuned on the other datasets. In particular, even if the differences are not significant, that NPRM achieved a higher RA in Admin-It OP and comparable scores in Admin-It RS . On the other hand, the NPRM fine-tuned on NewsEs obtained a sensible improvement in RA for Admin-It RD , even surpassing Baseline L , although not significantly. The best result for this subset (and on Admin-It overall) is obtained by combining OSE and NewsEs. Adding NewsEs could have helped because Spanish is more similar to Italian than English, both belonging to the same family of Romance languages and therefore sharing similar morphosyntactic structures Finally, combining all three datasets allowed an NPRM to obtain the best results in Admin-It OP and Admin-It RS in this setting. On both subsets, there are significant differences with both the baselines and the NPRMs fine-tuned only on Simpitiki W (p<0.001). When compared to SimPA and to the combination of SimPA and Simpitiki W , the significance is reached only on Admin-It OP (p<0.01). We also experimented with pairwise combinations of the three datasets without substantial improvements (see Appendix C for more scores of these experiments). We analyze where the NPRMs failed when ranking sentence pairs from Admin-It RD and Admin-It OP . We focus on these two subsets of Admin-It given the high results already obtained on Admin-It RS . NPRMs reached the highest RAs in this subset (0.896) when fine-tuned on OSE+NewsEs, OSE+NewsEs+Simpitiki W , or OSE+NewsEn+Simpitiki W . We analyze the errors made by the first model since it also achieved the highest RA (0.785) on the overall dataset among those models. This NPRM failed to rank five out of 48 sentence pairs in Admin-It RD . In some cases, given the same semantic content, punctuation could have affected the scoring because commas split the sentences in various parenthetical expressions (see the first example in Table 4). 
However, when a sentence contains terms, structures, or formulaic expressions typical of the Italian administrative language, the model ranks the pair correctly regardless of the punctuation, and even in the presence of a higher number of parenthetical expressions in the simplified sentence. In another case, a sentence was classified as complex when information was added to clarify some implicit information. As shown in the second example in Table [Please also inform this Office of the processing of your file by means of the enclosed form or by telephone (0001112), so that it is not held in abeyance.] Simplified: Per poter archiviare la pratica, chiediamo cortesemente di restituirci il modulo allegato, anche via fax, o di inviarci un messaggio di posta elettronica. [In order to be able to file the papers, we kindly ask you to return the attached form to us, also by fax, or send us an e-mail.] Original: L'Ufficio Anagrafe del Comune provvederà d'ufficio alle conseguenti variazioni nel registro della popolazione residente; alla messa in opera delle nuove targhe sull'edificio provvederanno direttamente gli Uffici comunali competenti. Si comunica inoltre che la suddetta variazione viene segnalata direttamente da questo ufficio ai seguenti enti: ENEL, SIT s.p.a. e Servizio Postale. [The Registry Office of the Municipality will provide ex officio for the consequent variations in the register of the resident population; the installation of the new plates on the building will be carried out directly by the competent municipal offices. Please also note that the above-mentioned variation will be notified directly by this office to the following entities: ENEL, SIT s.p.a. and Postal Service.] Simplified: Il Comune aggiornerà d'ufficio quanto di sua competenza (anagrafe, autorizzazioni, tributi, comunicazioni agli enti pensionistici ed all'Azienda Provinciale per i Servizi Sanitari), installerà la targhetta indicante il numero civico e comunicherà la variazione direttamente all'ENEL, alla SIT S.p.A. e all'Ente Poste Italiane. [The municipality will update ex officio all matters within its jurisdiction (registry office, authorisations, tributes, communications to pension authorities and to the Provincial Health Services Agency), install the plaque indicating the house number and communicate the change directly to ENEL, SIT S.p.A. and the Italian Post Office.] Table or in-domain terms (e.g., anagrafe [civil registry], tributi [tributes], enti pensionistici [pension authorities], Azienda Provinciale per i Servizi Sanitari [Provincial Health Services Agency]), which may have affected the pair ranking. Since sentences in Admin-It RD were manually aligned after simplification was performed at the document level, the annotators could better identify the information needed to be added or made explicit. Probably these sentences underwent more insertions than those in Adminit RS . When the simplification is operated directly at the sentence level, in fact, it is more difficult to understand which information to add, since the context is missing. This subset of Admin-It contains sentences from However, despite being in-domain, SimPA does not always help. For example, for sentence pairs containing Reorderings, the NPRM fine-tuned only on SimPA got the lowest RA. This can be explained by the fact that in more than half of the corpus only lexical level simplifications were performed. As also observed by We also analyze the scores obtained on sentence pairs with transformations involving verbal features. 
Here, the NPRM fine-tuned on OSE is the best, also reaching high scores when adding SimPA or NewsEs+SimPA to the data used for fine-tuning. However, using only SimPA results in the lowest scores in this set. This could be explained by the ARA experiments using OSE performed by Despite our best efforts, we cannot easily explain the performance of the NPRMs on sentence pairs with other operations. However, our analysis already offers some insights into how the models behave, serving as a first step for a more comprehensive study to be carried out in future work. In this paper, we investigated the behavior of a Neural Pairwise Ranking Model (NPRM) for assessing the readability of sentences from the Italian administrative language in zero-shot settings. To deal with data scarcity in this domain, we built Admin-It, a corpus of original-simplified parallel sentences in the Italian administrative language, containing three different styles of simplifications. This corpus allowed us to prove that NPRMs are effective in cross-domain and cross-lingual zeroshot settings, especially when simplifications were produced over single sentences and at several linguistic levels. We also conduced an error analysis and showed that the characteristics of the data used for fine-tuning rather than its size have an impact on a model's performance. In addition, we determined that simplifications where information was added are poorly handled by the models. In future work, we plan to analyze how NPRMs perform on sentences with the same simplification style (e.g., Admin-It RS ) annotated for different degrees of complexity by humans. We also plan to improve Admin-It RS to address the needs of specific targets, such as second language learners, who require the insertion of definitions of technical terms (not provided in the current version). To develop ARA models in this setting, we could leverage the alignments of Srikanth and Li (2021) that focus on elaborative simplifications. Furthermore, we plan to fine-tune models with in-domain data from languages with higher proximity to Italian, e.g., with datasets similar to the one built for Spanish by Table B Cross-domain scenario in English We conduced some preliminary experiments on NPRM at the sentence level. Firstly, we fine-tuned and tested the model based on bert-base-uncased on in-domain data, i.e., an English news corpus, OSE. Testing it via 5-Fold cross validation, we obtained a quite high ranking accuracy (0.959) 14 . Then, we analyzed 14 This experiment is also reported in Sec. 4. the model behavior in a cross-domain scenario on English (see Table In Table As described in Section 7.2, we analyzed the results obtained by some of the fine-tuned models on Admin-It OP , the Admin-It subset where the original-simplified pairs of sentences are rewritten by applying only one operation. The models selected for this analysis are those fine-tuned on a single corpus (i.e., Simpitiki W , OSE, NewsEn, NewsEs, and SimPA) and the best performing ones (i.e., NewsEn+NewsEs+OSE, OSE+NewsEs, OSE+NewsEs+SimPA, and OSE+SimPA). Results are reported in In Figure | 1,114 | 1,897 | 1,114 |
A Human-Centric Evaluation Platform for Explainable Knowledge Graph Completion | Explanations for AI are expected to help human users understand AI-driven predictions. Evaluating plausibility, the helpfulness of the explanations, is therefore essential for developing eXplainable AI (XAI) that can really aid human users. Here we propose a human-centric evaluation platform 1 to measure plausibility of explanations in the context of eXplainable Knowledge Graph Completion (XKGC). The target audience of the platform are researchers and practitioners who want to 1) investigate real needs and interests of their target users in XKGC, 2) evaluate the plausibility of the XKGC methods. We showcase these two use cases in an experimental setting to illustrate what results can be achieved with our system. | A Knowledge Graph (KG) is a structured representation of knowledge that captures the relationships between entities. It is composed of triples in the format (subject, relation, object), denoted as t = (s, r, o), where two entities are connected by a specified relation. For example, in the triple (London, isCapitalOf, UK), London and UK are the entities, and isCapitalOf is the relation. These entities can be depicted as nodes in a knowledge graph, while the relation denotes a labeled link connecting the subject to the object. Knowledge graphs are beneficial for many NLP tasks, e.g., fact checking The applicability of KGs in downstream tasks, however, is often limited by their incompleteness the defined entities The embedding based KGC models, however, are black boxes that do not (and cannot) provide explanations of why the model makes a certain prediction. The lack of transparency significantly hampers users' trust and engagement with KGC systems, especially in the high-risk domains, such as medicine We thus target to evaluate what kind of explanations are helpful for the users because ultimately, the explanations should directly aid them. Therefore, it is important to measure the plausibility of the explanations: the extent to which an explanation generated by XAI is comprehensible and beneficial to human users Our evaluation platform offers the following novel contributions. First, it introduces a new evaluation paradigm that assesses how well explanations can assist users in judging the correctness of KGC predictions. In contrast to the prevalent human evaluation paradigm in the literature that requests annotators to simulate AI's behavior With these novel contributions, our evaluation platform can effectively measure plausibility of XKGC methods. Considering the diversity of humans, our system also provides various statistical tools to rigorously and comprehensively analyze the collected feedback for reliable conclusions. Additionally, our evaluation platform aids in identifying genuine requirements from users regarding explanations, thereby it can assist in developing and refining XKGC methods to generate explanations that are centered around human needs. Finally, we formulate our study on human-centric evaluation as practical guidelines, which can be replicated to design evaluations for other use cases in the future. | We build an online system to evaluate XKGCs in a human centric manner. Our system considers the real needs and interests of human users in collaboration with AI, allowing us to investigate: can humans assess correctness of a KGC prediction based on its explanations? Which explanations are helpful for human users? 
The answers to these questions provide hints for evaluating the ultimate goal of an XAI method: the generated explanations are expected to assist human users in understanding AI-driven predictions. To this end, our platform has two user views: one for researchers to set up a test and the other for testers to give feedback. Researchers can prepare the evaluation study by uploading a JSON file that contains both the predic- tions and possible explanations. Here is an example JSON file including one prediction and its explanation. If the researchers want to evaluate multiple predictions, then they only need to add these predictions in the json file. Each predicted triple is associated with a set of explanation triples. Each explanation triple has a score that indicates their importance, which can be used for filtering and ranking the explanations. This score can e.g. come from the XKGC method. In addition, each prediction has the correct attribute which indicates whether this prediction is correct or not. The false prediction can be viewed as a control setup, which allows us to test whether users can determine if a prediction is correct based on the given explanations. Additionally, it allows us to assess the engagement of testers (see Sec. 5 for details). The probability attribute specifies the likelihood of the predicted triple by the KGC method. After the JSON is uploaded (top-left panel of Figure After the evaluation test has been setup, the researchers can share the link of the online system with the testers to evaluate. The system can work with crowdsourcing websites, e.g. Amazon Mechanical Turk (AMT), to employ testers for human evaluations. Figure The top panels of Figure Once done, the tester can submit the feedback and move on to the next prediction. After the last prediction, we offer the tester an additional form to share any feedback with us. This page can also be used, e.g. to share an identifying code that allows us to utilize the evaluation system with AMT, where the code is used to check completeness and engagement for payment. The system is as a web application consisting of frontend (HTML5/JavaScript) and backend (Python). We will describe the respective components and the data flow (shown in Figure The backend is a Python-based software framework (Flask The frontend is implemented in JavaScript (An-gularJS Figure Our system is deployed on a powerful server with 48 Threads (24 cores), 256 GB memory and 1GB Full-Duplex Internet connection. In theory, it can support more than one thousand testers to visit the evaluation platform. Due to the complexity and costliness of human evaluation, as well as the diversity among human testers, the collected feedback tends to be both limited in quantity and diverse in quality. Consequently, statistical analysis assumes a critical role to draw reliable conclusions from human feedback. We include the following statistical analysis tools in our platform. Power analysis. In human evaluation, there is often an important question: How many testers are necessary to draw a solid conclusion? There is no a universally applicable minimum sample size for obtaining statistically significant results Hypothesis testing. Are the observed results in human feedback statistically significant or simply due to chance? Hypothesis testing, e.g. t-tests, Wilcoxon Signed-Rank test, Mann Whitney test and Brunner-Munzel test, can be employed to measure them. With hypothesis tests, we can distinguish between real effects and random variations in a rigorous manner. 
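Both tools mentioned above are available off the shelf. The sketch below uses statsmodels and scipy to (i) estimate how many testers a paired design needs for a given effect size and (ii) run a paired non-parametric test on per-tester scores from two conditions; the effect size, power level, and toy accuracy values are illustrative, not results produced by the platform.

```python
import numpy as np
from scipy.stats import wilcoxon
from statsmodels.stats.power import TTestPower

# (i) Power analysis: testers needed to detect a medium effect (d = 0.5)
#     in a paired design at alpha = 0.05 with 80% power (illustrative numbers).
n_testers = TTestPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"testers needed: {np.ceil(n_testers):.0f}")

# (ii) Paired hypothesis test: per-tester assessment accuracy under two
#      hypothetical XKGC methods (one value per tester).
acc_method_a = np.array([0.82, 0.74, 0.91, 0.63, 0.85, 0.70, 0.88, 0.79])
acc_method_b = np.array([0.61, 0.66, 0.72, 0.54, 0.62, 0.58, 0.81, 0.69])
stat, p_value = wilcoxon(acc_method_a, acc_method_b)
print(f"Wilcoxon signed-rank: W={stat:.2f}, p={p_value:.3f}")
```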
Mixed effect analysis. Human feedback is often subject to variability of individual differences, engagement levels, and other random variation. (Linear) mixed effect analysis Correlation Analysis. In addition, correlation analysis can also be applicable to analyze the relations among different metrics. For example, we suggest multiple metrics to quantify plausibility, including: accuracy rate of tester's assessment, confidence of testers, number of helpful explanations, and time cost. Correlation analysis can explore relationships between metrics, and may provide insights into the reliability and validity of the results. Human evaluation can be subject to various biases that may affect the reliability of the conclusions Engagement. Testers often exhibit varying levels of engagement and various thinking modes. To mitigate the impact of tester bias, we propose that each tester assesses ≥ 2 XKGC methods, analyzing the feedback with paired tests, especially when the number of available testers is limited. Additionally, testers' engagement tends to decrease over time. Therefore, it is crucial to impose a constraint on the total evaluation time (e.g. one hour per session). Furthermore, to ensure the testers' proper engagement during the evaluation process, we can randomly assign some straightforward predictions as checkpoints for validation. Equivalency. All testers should evaluate similar set of predictions in a similar order. This is to reduce deviations caused by individual predictions. Diversity. Testers may have the tendency to retain information from previous predictions, which can result in the earlier assessments influencing the later ones. Consequently, we recommend selecting predictions that are as distinct from each other as possible to mitigate this concern. Balance. Predictions should be balanced. Specifically, numbers of correct and erroneous predictions should be similar, and the order of predictions should be random, such that testers cannot simply guess prediction results. Human-understandable benchmark data. The data used in a human evaluation needs to be human understandable, otherwise testers have no clue how to assess predictions and explanations. While a seemingly obvious statement, in practice we found it difficult to find KGC data that satisfies this constraint. In addition, testers recruited for a human evaluation are often lay people, not professionals of an area, thus plain datasets without domain-specific knowledge (such as biology and healthcare) would be better. If the evaluated XKGCs are domain specific, e.g., disease diagnosis, then specialists should accordingly be employed. To demonstrate what results and findings can be acquired with the proposed system, we conducted two evaluations. XAI is human-centric in nature. There is no onefor-all solution to meet all users' expectations. Our human-centric evaluation platform can help the researchers and practitioners interview their users to find: (1) what the users really need for understanding the KGC predictions in their applications, and (2) whether the generated explanations by their methods make sense for their users. We conducted a series of interviews with the evaluation system. A human-understandable KGC dataset was selected as benchmark data. We used the kinship dataset With the evaluation system, we visualized the predictions and their explanations to the testers and interviewed: what will be a helpful explanations for them? and why do they think an explanation helpful? 
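Feedback collected in such studies can be analysed with the mixed-effects approach mentioned above, treating the condition of interest (e.g. which XKGC method was shown) as a fixed effect and the tester as a random effect. The statsmodels sketch below runs on a synthetic long-format table; all column names and values are illustrative, and a binary correctness outcome is modelled linearly here only for simplicity.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for tester in range(20):                       # 20 hypothetical testers
    skill = rng.normal(0, 0.05)                # per-tester random offset
    for method in ["A", "B"]:                  # two XKGC methods under comparison
        for _ in range(6):                     # 6 evaluated predictions per method
            base = 0.75 if method == "A" else 0.65
            rows.append({"tester": f"t{tester}",
                         "method": method,
                         "correct": rng.random() < base + skill})
df = pd.DataFrame(rows)
df["correct"] = df["correct"].astype(float)

# Fixed effect: method; random intercept per tester.
model = smf.mixedlm("correct ~ method", data=df, groups=df["tester"]).fit()
print(model.summary())
```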
The interview is summarized in Table First, the interviews revealed that the testers often search for "paths" that link the nodes of the predicted triple to the nodes of explanations. See for example the "triangle" explanation in left panel of Figure 5 interviewees: 3 with machine learning background, 2 with good understanding about users of their AI system. Guide A guide is created, including textand video-introduction to the evaluation platform. 1. What will be a helpful explanations for users? 2. Why do users think an explanation helpful? Table Second, testers often find a rather small set of explanations helpful (2-3) and remark that a large number of explanations (e.g. >10) create confusion. Third, often it would be helpful for testers to have additional information from the knowledge graph -but this additional information was not identified by Method A. For example, Method A cannot create an explanation linking four entities, such as in right panel of Figure We also used the evaluation platform to compare two XKGC methods: which would be more helpful for users. The kinship dataset 1 Each tester evaluates 14 predicted triples to keep their engagement. The first two triples serve as practice to facilitate testers understanding and comfort with the system and the questions. The feedback is not included in statistical analysis. 3 The rest of the triples are different from each other. Each is randomly drawn from a unique relation (12 relation types in total in the dataset). Half of triples are correctly or incorrectly predicted to avoid dummy feedback. Paired test is employed. Half of triples are randomly selected for either XAI method. 6 The predicted triples are randomly shuffled. All testers evaluate the same set of predicted triples in the same order for fairness. Table 30 testers are invited to evaluate the predictions, following the steps illustrated in Section 2. We received the feedback from 23 of them. For each tester (anonymous) and each prediction, our platform collected the metrics: accuracy of assessment (denoted as Acc), confidence of assessment, number of helpful explanations (denoted as helpExpl), and time cost. Our platform also provides diverse statistical tools (see Section 4) to analyze the measurements, e.g. the results shown in the bottom panel of Figure Human evaluation has attracted increasing attention in XAI research due to its ultimate goal of aiding human to understand AI predictions. Many evaluation benchmarks are based on simulatability Existing KGC evaluation platforms focus on measurement of prediction performance. For instance, AI explanations only achieve their goal if the explanation is helpful to the human user. To measure this, we present a human-centric evaluation platform in the context of explainable knowledge graph completion. Distinguishing from the simulatabilitybased evaluation, our system assesses how well explanations assist users in judging the correctness of KGC predictions, and thus aligns better with human-AI interaction systems, where AI facilities humans rather than the other way around. To alleviate possible biases, we provide a set of guidelines in experiment design, and diverse analysis tools for reliable conclusions. The experiments demonstrate the findings and results that can be acquired with the proposed system. | 721 | 2,363 | 721 |
Privacy Implications of Retrieval-Based Language Models | Retrieval-based language models (LMs) have demonstrated improved interpretability, factuality, and adaptability compared to their parametric counterparts by incorporating retrieved text from external datastores. While it is well known that parametric models are prone to leaking private data, it remains unclear how the addition of a retrieval datastore impacts model privacy. In this work, we present the first study of privacy risks in retrieval-based LMs, particularly kNN-LMs. Our goal is to explore the optimal design and training procedure in domains where privacy is of concern, aiming to strike a balance between utility and privacy. Crucially, we find that kNN-LMs are more susceptible to leaking private information from their private datastore than parametric models. We further explore mitigations of privacy risks: When privacy information is targeted and readily detected in the text, we find that a simple sanitization step would eliminate the risks while decoupling query and key encoders achieves an even better utility-privacy trade-off. Otherwise, we consider strategies of mixing public and private data in both datastore and encoder training. While these methods offer modest improvements, they leave considerable room for future work. Together, our findings provide insights for practitioners to better understand and mitigate privacy risks in retrieval-based LMs 1 . | Retrieval-based language models | Email: mailme@alice.com mailme@bob.com mailme@charlie.com … URL: alice@bob.com harry@hogwarts.edu … text passages that are most relevant to the prompt provided to the model. These retrieved results are then utilized as additional information when generating the model's response to the prompt. Retrievalbased language models offer promising prospects in terms of enhancing interpretability, factuality, and adaptability. However, in privacy-sensitive applications, utility usually comes at the cost of privacy leakage. Recent work has shown that large language models are prone to memorizing In this work, we present the first study of privacy risks in retrieval-based language models, with a focus on the nearest neighbor language models (kNN-LMs) We begin our investigation by examining a situation where the creator of the model only adds private data to the retrieval datastore during inference, as suggested by We further explore mitigation strategies for kNN-LMs in two different scenarios. The first is where private information is targeted, i.e., can be easily identified and removed (Section 5). We explore enhancing the privacy of kNN-LMs by eliminating privacy-sensitive text segments from both the datastore and the encoder's training process. This approach effectively eliminates the targeted privacy risks while resulting in minimal loss of utility. We then explore a finer level of control over private information by employing distinct encoders for keys (i.e., texts stored in the datastore) and queries (i.e., 2 Other retrieval-based language models such as RETRO prompts to the language model). Through our experimental analysis, we demonstrate that this design approach offers increased flexibility in striking a balance between privacy and model performance. The second is a more challenging scenario where the private information is untargeted, making it impractical to remove from the data (Section 6). To address this issue, we explore the possibility of constructing the datastore using public datapoints. 
We also consider training the encoders of the kNN-LM model using a combination of public and private datapoints to minimize the distribution differences between the public data stored in the datastore and the private data used during inference. Despite modest improvements from the methods we explored, the mitigation of untargeted attacks remains challenging and there is considerable room for future work. We hope our findings provide insights for practitioners to better understand and mitigate privacy risks in retrieval-based LMs. In this section, we first review the key components of kNN-LMs (Section 2.1). Then, we discuss the data extraction attacks for language models (Section 2.2). These aspects lay a foundation for the subsequent exploration and analysis of privacy risks related to kNN-LMs. A kNN-LM Encoders Given a vocabulary V, the encoder Enc K or Enc Q performs the task of mapping a given key or query c ∈ V * to a fixed-length vector representation. Typically, this encoding process is accomplished through a trained language model, where Enc K (c) or Enc Q (c) represents the vector hidden representation obtained from the output layer of the language model when provided with the input c. Although in the default kNN-LMs, Enc K and Enc Q are commonly identical functions, we explore different options in the work. Datastore The datastore, {(Enc K (c i ), w i )}, is a key-value store generated by running the encoder Enc K (•) over a corpus of text; Each key is the vector representation Enc K (c i ) for some context c i ∈ V * , and each value w i ∈ V is the ground-truth next word for the leftward context c i . A search index is then constructed based on the key-value store to enable retrieval. Inference At inference time, when predicting the next token for a query x ∈ V * , the model queries the datastore with encoded query Enc Q (x) to retrieve x's k-nearest neighbors N k according to a distance function d(•, •) 3 . Then the model computes a softmax over the (negative) distances, which gives p kNN (y|x), a distribution over the next token, in proportional to: where t is a temperature parameter, and k is a hyper-parameter that controls the number of retrieved neighbors. The prediction is then interpolated with p LM , the prediction from the original LM: where λ is an interpolation coefficient. Prior work The attack consists of two main steps: 1) generating candidate reconstructions by prompting the trained models, and 2) sorting the generated candidates based on a score that indicates the likelihood of being a memorized text. Further details about the attack can be found in Appendix A. While previous research has successfully highlighted the risks associated with data extraction in parametric language models, there remains a notable gap in our understanding of the risks (and any potential benefits) pertaining to retrieval-based language models like kNN-LMs. This study aims to address this gap and provide insights into the subject matter. In this section, we formally describe our problem setup (Section 3.1) and privacy measurements (Section 3.2). We then detail our evaluation setup (Section 3.3). We consider a scenario where a service provider (e.g. a financial institution) aims to enhance its customer experience by developing a kNN-LM and deploying it as an API service. 
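The inference computation reviewed earlier in this section, a softmax over the negative distances to the retrieved neighbours interpolated with the parametric LM's distribution, can be sketched with a brute-force datastore as follows. In practice an approximate index (e.g. FAISS) replaces the exact search and the encoders are learned models; all shapes and hyper-parameter values below are illustrative.

```python
import numpy as np

def knn_lm_next_token(query_vec, keys, values, p_lm, vocab_size, k=8, t=1.0, lam=0.25):
    """Interpolate a kNN next-token distribution with the parametric LM distribution.

    query_vec: encoded query context, shape (d,)
    keys:      datastore key vectors, shape (n, d)
    values:    ground-truth next-token ids for each key, shape (n,)
    p_lm:      parametric LM distribution over the vocabulary, shape (vocab_size,)
    """
    dists = np.linalg.norm(keys - query_vec, axis=1) ** 2     # squared L2 distance
    nn = np.argsort(dists)[:k]                                # k nearest neighbours
    weights = np.exp(-dists[nn] / t)
    weights /= weights.sum()                                  # softmax over -d(.,.)/t
    p_knn = np.zeros(vocab_size)
    np.add.at(p_knn, values[nn], weights)                     # aggregate mass per token id
    return lam * p_knn + (1.0 - lam) * p_lm                   # final interpolation

# Toy datastore: 100 keys in a 16-dim space, 50-word vocabulary.
rng = np.random.default_rng(0)
keys = rng.normal(size=(100, 16))
values = rng.integers(0, 50, size=100)
p_lm = np.full(50, 1 / 50)
p = knn_lm_next_token(rng.normal(size=16), keys, values, p_lm, vocab_size=50)
print(p.sum())   # ~1.0
```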
Note that the development of kNN-LMs intended solely for personal use (e.g., constructing a kNN-LM email autocompleter by combining a public LM with a private email datastore) falls outside the scope of our study because it does not involve any attack channels that could be exploited by potential attackers. We assume that the service provider possesses its own private data (D private ) specific to its domain, in addition to publicly available data (D public ). We identify two key design choices which impact the quality and privacy of such a deployed service. First, the service provider chooses which data to include in its datastore; this may be public data (D public ), private data (D private ), or a mix of both. Second, they choose whether to use encoders that are pre-trained on publicly available data (Enc public ) or further fine-tuned on the private data (Enc private ). We posit that careful consideration of these design choices is needed to establish a balance between privacy preservation and utility. The service provider in such a scenario is concerned with making a useful API while keeping their private data hidden from malicious users or attackers. Hence, the service provider's objective is to attain a high level of utility (as measured by perplexity) on a held-out set of D private while simultaneously minimizing the disclosure of private information. We quantify the metrics we consider for privacy in Section 3.2. We now describe how we evaluate the risk of data extraction attacks within the scenario described earlier in Section 3.1. Threat model We assume that the service provider deploys a kNN-LM with API access to p(y|x). This API provides the attacker with the capability to compute perplexity, conduct text completion, and perform other relevant tasks. However, it is important to note that the attacker is restricted from accessing the internal parameters or the datastore of the deployed model. Our study considers two types of privacy risks, each associated with a particular type of attack: Targeted attacks We define targeted risk as a privacy risk that can be directly associated with a segment of text (e.g., personal identifiers such as addresses and telephone numbers) and propose the targeted attack. The significance of a targeted attack becomes apparent when considering that targeted risks have been explicitly addressed in various privacy regulations (Centers for Medicare & Medicaid Services, 1996). The targeted attack proceeds as follows: • We first detect all unique personal identifiers in the private dataset, denoted as {ρ_i}_{i=1}^p ; • We then sort the reconstruction candidates based on the membership metrics defined in Appendix A, and only keep the top-n candidates {c_i}_{i=1}^n ; • Finally, we detect {ρ_i}_{i=1}^q , the unique PIIs in the top-n candidates, and then count |{ρ_i}_{i=1}^p ∩ {ρ_i}_{i=1}^q |, namely how many original PIIs have been successfully reconstructed by the attack. A larger number means higher leakage of private PIIs. Untargeted attacks The untargeted attack is the case where the attacker aims to recover an entire training example, rather than a specific segment of text. Such attacks can potentially lead to the theft of valuable private training data.
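The two evaluation metrics, the PII-recovery count for the targeted attack above and the ROUGE-L-based scoring used for the untargeted attack detailed in the next paragraphs, can be sketched as follows. The PII detectors and the longest-common-subsequence ROUGE-L implementation are illustrative assumptions rather than the paper's code; both functions take candidates that have already been sorted and truncated to the top-n by the membership metric of Appendix A.

```python
import re

# Illustrative PII detectors (the paper targets phone numbers, email addresses,
# and URLs; these regular expressions are placeholders, not its detection step).
PII_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),        # email addresses
    re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),    # US-style phone numbers
]

def extract_piis(texts):
    found = set()
    for text in texts:
        for pattern in PII_PATTERNS:
            found.update(pattern.findall(text))
    return found

def targeted_leakage(private_texts, top_n_candidates):
    """Count how many unique private PIIs reappear among the top-n candidates."""
    private_piis = extract_piis(private_texts)                  # {rho_i}, i = 1..p
    recovered = extract_piis(top_n_candidates) & private_piis   # intersection with true PIIs
    return len(recovered), len(private_piis)

def lcs_length(a, b):
    """Longest common subsequence length, used for a simple ROUGE-L F1."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def rouge_l_f1(candidate, reference):
    c, r = candidate.split(), reference.split()
    lcs = lcs_length(c, r)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(c), lcs / len(r)
    return 2 * precision * recall / (precision + recall)

def untargeted_scores(top_n_candidates, private_examples):
    """Score each candidate by ROUGE-L F1 against its closest private example;
    higher values indicate stronger verbatim reconstruction."""
    return [max(rouge_l_f1(c, p) for p in private_examples) for c in top_n_candidates]
```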
We adopt the attack proposed by prior work on training data extraction, which proceeds as follows: • We first sort the reconstruction candidates based on the membership metrics defined in Appendix A, and only keep the top-n candidates {c_i}_{i=1}^n ; • For each candidate c_i , we then find its closest example p_i in the private dataset and compute the ROUGE-L score between c_i and p_i . Note that while the attack's performance evaluation employs the private dataset, following established reconstruction attack practices, the attack itself never utilizes this dataset. Our main evaluation uses the Enron Email dataset. We pre-process the Enron Email dataset by retaining only the email body (see Table ). This section presents our investigation of whether the addition of private data to the retrieval datastore during inference is an effective method for achieving a good trade-off between privacy (measured by the metrics defined in Section 3.2) and utility (measured by perplexity) in kNN-LMs. We are particularly interested in three scenarios: utilizing only Enc public (the publicly pre-trained language model), utilizing only Enc private (the model fine-tuned from Enc public using private data), and utilizing Enc public with D private (the combination of the public model with the private datastore). As shown in Table , when it comes to kNN-LMs, incorporating a private datastore (D private ) with a public model (Enc public ) yields even greater utility compared to relying solely on the fine-tuned model (Enc private ). However, this utility improvement also comes at the expense of increased privacy leakage. These findings suggest that the privacy concern stemming from the private datastore outweighs that resulting from the privately fine-tuned model, indicating a lack of robust privacy protection in the design of kNN-LMs. Additionally, we note that the combination of Enc private and D private achieves the highest utility but also incurs the highest privacy cost. Our previous findings indicate that the personalization of kNN-LMs with a private datastore is more susceptible to data extraction attacks compared to fine-tuning a parametric LM with private data. At the same time, leveraging private data offers substantial utility improvements. Is there a more effective way to leverage private data in order to achieve a better balance between privacy and utility in kNN-LMs? In this section we focus on addressing privacy leakage in the context of targeted attacks (see the definition in Section 3.2), where the private information can be readily detected from text. We consider several approaches to tackle these challenges in Section 5.1 and Section 5.2, and present the results in Section 5.3. We also investigate the effect of hyper-parameters in Section 5.4. As demonstrated in Section 4, the existence of private examples in the kNN-LMs' datastore increases the likelihood of privacy leakage since they are retrieved and aggregated in the final prediction. Therefore, our first consideration is to create a sanitized datastore by eliminating privacy-sensitive text segments. We note that this verbatim-level definition of "privacy leakage" is general and widely adopted. Notably, regulations such as HIPAA (Centers for Medicare & Medicaid Services, 1996) and CCPA (California State Legislature, 2018) offer explicit definitions of privacy-sensitive data. Consequently, these regulatory frameworks can serve as the basis for establishing the verbatim-level definition of "privacy leakage".
For example, HIPAA defines 18 identifiers that are considered personally identifiable information (PII), including names, addresses, phone numbers, etc. We propose the following three options for sanitization: • Replacement with <|endoftext|>: replace each privacy-sensitive phrase with the <|endoftext|> token; • Replacement with dummy text: replace each privacy-sensitive phrase with a fixed dummy phrase based on its type. For instance, if telephone numbers are sensitive, they can be replaced with "123-456-789"; and • Replacement with public data: replace each privacy-sensitive phrase with a randomly selected public phrase of a similar type. An example is to replace each phone number with a public phone number found on the Web. The encoders in a kNN-LM are another potential source of privacy leakage. While they are typically optimized on target-domain data to enhance performance, fine-tuning directly on private data in privacy-sensitive tasks may result in privacy leaks (Table ). We propose using separate encoders for keys and queries in kNN-LMs to allow for finer control over privacy preservation. For example, the encoder for queries can be the sanitized encoder, while the encoder for keys can be the non-sanitized one; this way, the query encoder can be more resistant to privacy leakage, while the key encoder can provide better query results. While it is not a common practice in kNN-LMs, we view the separation of key and query encoders as a promising approach to reduce the discrepancy between the prompt and the datastore, and to reduce privacy leakage. The privacy risk of a kNN-LM can also be impacted by its hyper-parameters, such as the number of neighbors k and the interpolation coefficient λ. It is important to consider these hyper-parameters in the customization of kNN-LMs to ensure that the privacy-utility trade-off is well managed, as demonstrated in Table . We finally analyze the impact of key hyper-parameters on utility and privacy risks in kNN-LMs, using D private as the datastore and Enc private for both Enc K and Enc Q . First, we vary λ, the interpolation coefficient, and observe that increasing λ decreases perplexity but increases privacy risk (see Figure ). In this section, we explore potential methods to mitigate untargeted risks in kNN-LMs, which is a more challenging setting due to the opacity of the definition of privacy. It is important to note that the methods presented in this study are preliminary attempts, and fully addressing untargeted risks in kNN-LMs remains a challenging task. Considering that storing D private in the datastore is the primary cause of data leakage (as discussed in Section 4), and the challenge of sanitizing private data in the face of untargeted risks, we propose the following approaches that leverage public data to mitigate these risks. Adding public data to the datastore The quality of the retrieved neighbors plays a crucial role in the performance and accuracy of kNN-LMs. Although it is uncommon to include public datapoints that are not specifically designed for the task or domain in kNN-LMs' datastore, doing so could potentially help reduce privacy risks in applications that prioritize privacy. This becomes particularly relevant in light of previous findings, which suggest substantial privacy leakage from a private datastore.
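Looking back at the three sanitization options listed above, a minimal sketch follows; the detection patterns, dummy values, and public-phrase pool are illustrative assumptions rather than the paper's sanitization pipeline.

```python
import random
import re

PHONE = re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b")   # illustrative phone-number detector
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")       # illustrative email detector

DUMMY = {"phone": "123-456-789", "email": "user@example.com"}
PUBLIC_POOL = {"phone": ["202-555-0101", "202-555-0199"], "email": ["info@example.org"]}

def sanitize(text, mode="endoftext"):
    """Replace privacy-sensitive spans before building the datastore or fine-tuning."""
    for kind, pattern in (("phone", PHONE), ("email", EMAIL)):
        if mode == "endoftext":
            replacement = lambda m: "<|endoftext|>"
        elif mode == "dummy":
            replacement = lambda m, k=kind: DUMMY[k]
        else:  # mode == "public": swap in a random public phrase of the same type
            replacement = lambda m, k=kind: random.choice(PUBLIC_POOL[k])
        text = pattern.sub(replacement, text)
    return text

print(sanitize("Call me at 555-123-4567 or mail alice@corp.com", mode="dummy"))
```

The same sanitized corpus can then be used both to build the datastore and to fine-tune the encoders, which is the combination reported to remove the targeted risk with minimal utility loss.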
Fine-tuning encoders on private data with DP-SGD Differentially private stochastic gradient descent (DP-SGD) Fine-tuning encoders on a mixture of public and private data However, adding public data can potentially lead to a decrease in retrieval performance as there is a distribution gap between the public data (e.g., Web Crawl data) used to construct the datastore and the private data (e.g., email conversations) used for encoder fine-tuning. To address this issue, we propose further fine-tuning the encoder on a combination of public and private data to bridge the distribution gap and improve retrieval accuracy. The ratio for combining public and private datasets will be determined empirically through experimentation. Similarly to Section 5.2, we could also employ separate encoders for keys and queries in the context of untargeted risks, which allows for more precise control over privacy preservation. We mainly present our findings using the Enron Email dataset. In Appendix B, we provide results from the Medical Transcriptions dataset, and those findings align with our main findings. Table Using a public datastore reduces privacy risk but also results in a sudden drop in utility. If more stringent utility requirements but less strict privacy constraints are necessary, adding a few private examples to the public datastore, as shown in Table Table We also note that fine-tuning the encoder using DP-SGD only helps slightly reduce the extraction risk, despite the relatively strict privacy budget ε = 10.0. This is because due to the existence of a private datastore, each inference query in the kNN-LM process incurs supplementary privacy costs, leading to the final kNN-LM model not satisfying the ε-Differential Privacy criteria. We further try fine-tuning the encoder using a combination of public and private data, which results in Enc mixed . The training dataset comprises the entire set of private data of size N priv and N priv × r public data, where r takes values from {0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1.0}. We present attack results using r = 0.05 as it achieves the best perplexity. As shown in Table 7 Related Work Retrieval-based language models Language models have been shown to tend to memorize Although previous research has demonstrated the potential risks of data extraction in parametric language models, our study is the first investigation of the privacy risks associated with retrieval-based language models; we also propose strategies to mitigate them. The closest effort is This work presents the first study of privacy risks in retrieval-based language models, specifically focusing on kNN-LMs. Our objective is to investigate designs and training methodologies for kNN-LMs that strike a better privacy-utility trade-off. There are several conclusions from our investigation. First, our empirical study reveals that incorporating a private datastore in kNN-LMs leads to increased privacy risks (both targeted and untargeted) compared to parametric language models trained on private data. Second, for targeted attacks, our experimental study shows that sanitizing kNN-LMs to remove private information from both the datastore and encoders, and decoupling the encoders for keys and queries can eliminate the privacy risks without sacrificing utility, achieving perplexity of We discuss the limitations of this work as follows. 
• The current study mainly demonstrates the privacy implications of nearest neighbor language models, but there are many other variants of retrieval-based language models, such as RETRO • In the current study, we use WikiText-103 as the public domain for Enron Email, and PubMed-Patients for Medical Transcriptions. While we believe that these choices of public datasets are realistic, it is important to recognize that this selection may restrict the generalizability of our findings. We acknowledge this limitation and leave the exploration of alternative options for the public dataset as a direction for future work. • Furthermore, an unexplored aspect of our study is the potential combination of proposed strategies, such as decoupling keys and query encoders, with more diverse privacytechniques. A.1 Untargeted Attack The attacker generates candidates for reconstructions via querying the retrieval-augmented LM's sentence completion API with contexts. Following Carlini et al. ( Sort candidates by calibrated perplexity The second step is to perform membership inference on candidates generated from the previous step. We are using the calibrated perplexity in our study, which has been shown to be the most effective membership metric among all tested ones by The perplexity measures how likely the LM is to generate a piece of text. Concretely, given a language model f θ and a sequence of tokens x = x 1 , . . . , x l , Perplexity(f θ , x) is defined as the exponentiated average negative log-likelihood of x: A low perplexity implies a high likelihood of the LM generating the text; For a retrieval-augmented LM, this may result from the LM has been trained on the text or has used the text in its datastore. However, perplexity may not be a reliable indicator for membership: common texts may have very low perplexities even though they may not carry privacy-sensitive information. Previous work Prompts for the targeted attack We gather common preceding context for telephone numbers, email addresses, and URLs, and use them as prompts for the targeted attack. Table Attack parameters For the untargeted attack, we generate 100,000 candidates, and for the targeted attack, we generate 10,000 candidates. We use beam search with repetition penalty = 0.75 for the generation. Here, ε ∈ R >0 , δ ∈ [0, 1) are privacy parameters quantifying the privacy guarantee of the algorithm. DP Stochastic Gradient Descent (DP-SGD) It then clips the gradient ℓ 2 -norm to a maximum ℓ 2 -norm of C: Finally, it produces the private gradient ĝt by injecting Gaussian noise into the sum of the clipped per-example gradients: where (0, σ 2 C 2 I) is a Gaussian distribution with mean 0 and covariance σ 2 C 2 I, and the noise multiplier σ is computed from (ε, δ) by inverse privacy accounting (e.g., We also evaluate whether DP can mitigate extraction risks in kNN-LMs. Specifically, we fine-tune the pre-trained LM on the private dataset with DP-SGD. We vary the privacy budget ε and fix the failure probability δ to be 1/N , where N is the number of training examples. It's important to acknowledge that due to the utilization of a private datastore, each inference query in the kNN-LM Example #1: PAST MEDICAL HISTORY:, He has difficulty climbing stairs, difficulty with airline seats, tying shoes, used to public seating, and lifting objects off the floor. He exercises three times a week at home and does cardio. He has difficulty walking two blocks or five flights of stairs. Difficulty with snoring. 
He has muscle and joint pains including knee pain, back pain, foot and ankle pain, and swelling. He has gastroesophageal reflux disease... Example #2: HISTORY OF PRESENT ILLNESS: ,This is a 55-year-old female with a history of stroke, who presents today for followup of frequency and urgency with urge incontinence. This has been progressively worsening, and previously on VESIcare with no improvement. She continues to take Enablex 50 mg and has not noted any improvement of her symptoms. The nursing home did not do a voiding diary. She is accompanied by her power of attorney... Example #3: EXAM: , Ultrasound examination of the scrotum.,REASON FOR EXAM: , Scrotal pain.,FINDINGS: ,Duplex and color flow imaging as well as real time gray-scale imaging of the scrotum and testicles was performed. The left testicle measures 5.1 x 2.8 x 3.0 cm. There is no evidence of intratesticular masses. There is normal Doppler blood flow. The left epididymis has an unremarkable appearance. There is a trace hydrocele... Example #4: TESTICULAR ULTRASOUND,REASON FOR EXAM: ,Left testicular swelling for one day.,FINDINGS: ,The left testicle is normal in size and attenuation, it measures 3.2 x 1.7 x 2.3 cm. The right epididymis measures up to 9 mm. There is a hydrocele on the right side. Normal flow is seen within the testicle and epididymis on the right.,The left testicle is normal in size and attenuation, it measures 3.9 x 2.1 x 2.6 cm... Example #5: PHYSICAL EXAMINATION: , The patient is a 63-year-old executive who was seen by his physician for a company physical. He stated that he was in excellent health and led an active life. His physical examination was normal for a man of his age. Chest x-ray and chemical screening blood work were within normal limits. His PSA was elevated.,IMAGING:,Chest x-ray: Normal.,CT scan of abdomen and pelvis: No abnormalities... process incurs supplementary privacy costs, leading to the final kNN-LM model not satisfying the (ε, δ)-Differential Privacy criteria. As demonstrated in Table We primarily showcase our findings using the Enron Email dataset in the main paper, as its inclusion of personally identifiable information (PII) enables us to effectively evaluate both targeted and untargeted attacks. To validate our findings, we hereby replicate our experiments specifically for untargeted attacks on the Medical Transcriptions dataset. The Medical Transcriptions dataset The preliminary findings presented in Table We also observe on the Medicial Transcriptions dataset that separating the key and query encoders yields better results in striking a favorable trade-off between privacy and utility. As shown in Table | 1,389 | 31 | 1,389 |
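Closing out the privacy study: the membership metric in its appendix builds on sequence perplexity, the exponentiated average negative log-likelihood defined above. A short sketch with a Hugging Face causal LM is given below; the ratio-based calibration against a reference model is one common choice and should be treated as an assumption rather than the paper's exact formulation, and the model names are placeholders.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def perplexity(model, tokenizer, text):
    """Exponentiated average negative log-likelihood of `text` under `model`."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(enc.input_ids, labels=enc.input_ids)   # .loss is the mean token NLL
    return torch.exp(out.loss).item()

def calibrated_score(target_model, reference_model, tokenizer, text):
    """Lower values suggest the target model is unusually confident on `text`
    relative to a generic reference model (a common calibration heuristic)."""
    return perplexity(target_model, tokenizer, text) / perplexity(reference_model, tokenizer, text)

# Example usage (placeholder model names, downloads omitted here):
# tok = AutoTokenizer.from_pretrained("gpt2")
# target = AutoModelForCausalLM.from_pretrained("gpt2")        # stands in for the deployed LM
# reference = AutoModelForCausalLM.from_pretrained("distilgpt2")
# print(calibrated_score(target, reference, tok, "some candidate reconstruction"))
```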
MULTIHIERTT: Numerical Reasoning over Multi Hierarchical Tabular and Textual Data | Numerical reasoning over hybrid data containing both textual and tabular content (e.g., financial reports) has recently attracted much attention in the NLP community. However, existing question answering (QA) benchmarks over hybrid data only include a single flat table in each document and thus lack examples of multistep numerical reasoning across multiple hierarchical tables. To facilitate data analytical progress, we construct a new large-scale benchmark, MULTIHIERTT, with QA pairs over Multi Hierarchical Tabular and Textual data. MULTIHIERTT is built from a wealth of financial reports and has the following unique characteristics: 1) each document contain multiple tables and longer unstructured texts; 2) most of tables contained are hierarchical; 3) the reasoning process required for each question is more complex and challenging than existing benchmarks; and 4) fine-grained annotations of reasoning processes and supporting facts are provided to reveal complex numerical reasoning. We further introduce a novel QA model termed MT2Net, which first applies facts retrieving to extract relevant supporting facts from both tables and text and then uses a reasoning module to perform symbolic reasoning over retrieved facts. We conduct comprehensive experiments on various baselines. The experimental results show that MULTIHIERTT presents a strong challenge for existing baselines whose results lag far behind the performance of human experts. The dataset and code are publicly available at | In recent years, as key to many NLP tasks such as QA, there is a flurry of works on numerical reasoning over various types of data including textual data However, existing QA datasets over hybrid data only contain a single flat table in each document To address these shortcomings, we present MUL-TIHIERTT: an expert-annotated dataset that contains 10,440 QA pairs, along with annotations of reasoning processes and supporting facts. To the best of our knowledge, MULTIHIERTT is the first dataset for solving complicated QA tasks over documents containing multiple hierarchical tables and paragraphs. In addition, to address the challenge of MULTIHIERTT, we propose MT2Net to first retrieve supporting facts from financial reports then generate executable reasoning programs to answer the questions. Our experiments show that MT2Net outperforms all other baselines and achieves 38.43% F1 score. However, all models still lag far behind the performance of human experts with 87.03% in F1. It demonstrates MUL-TIHIERTT presents a strong challenge for existing baseline models and is a valuable benchmark for future research. The main contribution of this work can be summarized as follows: • We propose a new large-scale dataset MULTI-HIERTT. It contains 10,440 examples along with fully annotated numerical reasoning processes and supporting facts. A strict quality control procedure is applied to ensure the meaningfulness, diversity, and correctness of each annotated QA example. • Compared with existing datasets, each document in MULTIHIERTT contains multiple hierarchical tables and longer unstructured text. A more complex reasoning process across multiple tables and paragraphs is required to correctly answer the question. • We propose a novel QA model, MT2Net. The model first applies facts retrieving to extract relevant supporting facts from both hierarchical tables and text. And it then uses a reasoning module to reason over retrieved facts. 
• Comprehensive experiments are conducted on various baselines. The experimental results demonstrate that the current QA models still lag far behind the human expert performance, and further research is needed. | Question Answering Benchmark There are numerous QA datasets focusing on text, table/knowledge base (KB), and hybrid data. SQuAD Numerical Reasoning Numerical reasoning plays an important role in different NLP tasks Financial NLP Financial NLP has attracted much attention recently. There have been various application in different tasks like risk management MULTIHIERTT are deployed based on the FinTab-Net dataset How much is the sum of stock purchase rights in 2018 lower than those in 2017? How many years were the sales and client service expenses higher than software development expenses? How much of US corporate debt securities is there in total (in 2009) without consider gross unrealized gain and gross unrelized loss? How many financing activities continues to increase every year from 2017 to 2021? Which types of fuel emission allowance sales exceed 16 % of total in CIPS? Which year does the supply chain revenues have the largest proportion to the total? In which section the sum of trading non-derivative assets has the highest value? If expected return on assets develops with the same growth rate in 2010, what will it reach in 2011? If salaries and wages needs to make up 40% of the total benefits, what is the difference between the target value and the actual value? When does net investment income reach the peak value? When does the restructuring costs exceed the average value? formats can be extracted and post-processed according to HTML tags. The raw data is filtered as follows: First, we extract documents with 1 to 4 pages and 2 to 6 tables from FinTabNet. Second, we filter out the documents with limited textual contents. Third, as we aim for the numerical reasoning ability, we also exclude documents with tables containing little numerical information. Then, we use a pre-processing script to extract the hierarchical structure of each HTML-format table. And we ignore those tables that cannot be handled by the pre-processing script. As a result, a total of 4,791 documents were selected for further annotation. For each document selected in §3.1, the annotators are required to compose one or two QA examples along with detailed annotation. The process of annotating each QA example is as follows: 1) The annotators are first asked to compose a complex question that requires numerical reasoning and is meaningful for helping novices understand the annual reports. The annotators are encouraged to compose questions that require the information from both the textual and tabular content or from multiple tables. 2) For those questions requiring numerical expression, the annotators are then asked to write down the reasoning programs to answer the question. In detail, the annotators are asked to elaborate on the operation steps to answer the question. The definitions of all operations are shown in Table Strict quality control procedures are designed to ensure the quality of dataset annotation, especially the diversity and meaningfulness of proposed questions. The human evaluation scores and interevaluator agreements are reported in Table Annotation De-Biasing As suggested in previous research To further ensure the diversity and correctness of proposed questionreasoning pairs, each document is assigned to three annotators and one verifier in order. 
For annotators, each is required to first validate the previous annotator's annotation and fix the mistakes if there are. Then, they are asked to create one or two more question-reasoning pairs that are different from the existing ones. After each annotator finishes tasks, we assign another verifier with good performance on this project to validate all the annotations. Core statistics of MULTIHIERTT are reported in Table Compared with TAT-QA and FinQA, documents in MULTIHIERTT contain longer unstructured input text and multiple tables, making the evidence retrieving and reasoning more challenging. And MULTIHIERTT has diverse and complex questions, as illustrated in Figure We also analyze supporting facts coverage for each question. In MULTIHIERTT, 1) 10.24% of the questions only require the information in the paragraphs to answer; 2) 33.09% of the questions only require the information in one To address the challenge of MULTIHIERTT, we propose a framework named MT2Net. Figure Fact Retrieving Module The whole input text in each document of MULTIHIERTT can exceed 3,000 tokens and contain many numbers, which is beyond the capability of the current popular QA models The following table presents product and service sales and operating expenses by segment (dollar in millions): Product sales for 2018 increased $4.3 billion, or 25 percent, as compared with 2017. The increase was primarily due to the addition of $2.9 billion of product sales from Innovation Systems and higher restricted and F-35 volume at Aerospace Systems. Approximately $26.6 billion of the $53.5 billion total at December 31, 2018 is expected to be converted into sales in 2019. 1. The funded Aerospace Systems in 2017 was 9560. 2. The funded Mission Systems in 2017 was 9277. 3. Approximately $26.6 billion of the $53.5 billion total at December 31, 2018 is expected to be converted into sales in 2019. bi-classifier Then they serve as input to reasoning module. We first use pre-trained LMs to encode the retrieved sentences from the facts retrieving module. Then, we divide the answers into two types: arithmetic program and span. For each answer type, we use a unique sub-module to calculate the conditional answer probability P (answer|type): Program sub-module: The structure is similar with the program generator of FinQANet Span sub-module: The span sub-module aims to select the predicted span candidate, which is a span of retrieved sentences. The answer probability is defined as the product of the probabilities of the start and end positions in the retrieved evidence. Meanwhile, an extra output layer is used to predict the probability P (type) of each answer type. In particular, we take the output vector [CLS] from LMs as input to compute the probability. In the training stage, the final answer probability is defined as the joint probability over all feasible answer types, i.e., type P (type) × P (answer|type). Here, both P (type) and P (answer|type) is learned by the model. In the inference stage, the model first selects the most probable answer type and then uses corresponding sub-modules to predict the answer. TAGOP TAGOP FinQANet FinQANet Different from ours, FinQANet ignores the hierarchical structure of tables when linearizing each row of a table. And it is not designed to answer span selection questions. 
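Returning to MT2Net's reasoning module described above: training maximizes the marginal answer probability, the sum over answer types of P(type) times P(answer|type), while inference first picks the most probable type and then decodes with the corresponding sub-module. The sketch below is a schematic reading of that description, not the released MT2Net code, and the tensor shapes are illustrative.

```python
import torch

def training_loss(type_logits, answer_logprob_per_type):
    """Negative log of the marginal answer probability:
    sum over answer types of P(type) * P(answer | type)."""
    type_probs = torch.softmax(type_logits, dim=-1)       # P(type), e.g. [program, span]
    answer_probs = torch.exp(answer_logprob_per_type)     # P(answer | type) from each sub-module
    marginal = (type_probs * answer_probs).sum(dim=-1)
    return -torch.log(marginal + 1e-12).mean()

def predict(type_logits, program_submodule, span_submodule, encoded_facts):
    """Inference: choose the most probable answer type, then decode with its sub-module."""
    best_type = type_logits.argmax(dim=-1).item()
    if best_type == 0:                                    # arithmetic program
        return program_submodule(encoded_facts)
    return span_submodule(encoded_facts)                  # span selection
```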
Longformer + Reasoning module To demonstrate the necessity of breaking up models into facts retrieving and reasoning modules, we directly use the pre-trained Longformer-base Fact Retrieving Module + TAPAS We employ TAPAS (MASKLM-base) Fact Retrieving + NumNet NumNet+ Fact Retrieving Module + Seq2Prog A Seq2Prog architecture adopted from baseline of MathQA dataset For the fact retrieving module, we use BERT-base as the classifier. Since most of the examples in our dataset have less than 7 supporting facts (89.3%), and we find that longer inputs might lower the performance of the reasoning module, we take the top-10 retrieving facts as the retriever results. For the reasoning module, we experiment on using BERT For Evaluation Metrics, following TAT-QA To test the performance of the human expert on MULTIHIERTT, we invite another two professionals. We randomly sampled 60 examples from the test set, and ask them to answer the questions individually within three hours. The results are reported in the last row of Table Table Necessity of applying retrieving-reasoning pipeline Directly using an end-to-end pretrained Longformer model to replace a retrieving module falls far behind. This makes sense because longer input contains much irrelevant numerical information, which makes the reasoning module difficult to learn. structure Both TAGOP and FinQANet perform worse than MT2Net because they ignore the table's hierarchical structure in the retrieving part. Different from ours, which flatten each cell with its header hierarchical structures, both TAGOP and FinQANet flatten each table by rows, losing the table's hierarchical structure information. Necessity of an effective reasoning module Most questions in MULTIHIERTT require models to perform multi-step reasoning and integrate different kinds of operators. Generally, the reasoning module generating reasoning programs to get answers performs better than directly generating answers by end-to-end method, i.e. adopted TAPAS. Both adopted NumNet and TAGOP perform much worse than MT2Net because they only support limited symbolic reasoning. Specifically, TAGOP can only perform with a single type of pre-defined aggregation operator for each question, and NumNet only supports addition and subtraction operators when performing symbolic reasoning. By contrast, MT2Net performs better than FinQANet and Seq2Prog because it applies different sub-modules to answer questions with different answer types. The results also show that larger pre-trained models have better performance. This is because they are pre-trained on more financial corpus. However, all the models perform significantly worse than human experts, indicating MULTIHIERTT is challenging to state-of-the-art QA models and there is a large room for improvements for future research. To guide the future directions of model improvement, various performance breakdown experiments on the test set are conducted using the MT2Net (RoBERTa-large) model. Table We further investigate the proposed MT2Net by analyzing error cases. We randomly sample 100 error cases from the results of the MT2Net (RoBERTa-large) model on the test set, and classify them into four main categories as shown in Table 6, along with examples. The analysis shows that around 64% error (Wrong Operand/Span+Missing Operand) is caused by the failure to integrate the supporting facts correctly. Meanwhile, the current model fails to integrate external finance knowledge to answer questions. 
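As a complement, the fact-retrieving step described earlier in this section (a BERT-base pair classifier whose top-10 scored facts feed the reasoning module) can be sketched as follows; the relevance scorer here is a stand-in callable, not the trained retriever.

```python
def retrieve_top_facts(question, candidate_facts, relevance_scorer, top_k=10):
    """Score each question/fact pair and keep the top-k facts.

    `relevance_scorer(question, fact)` is assumed to return the probability that
    the fact supports the question (e.g., the positive-class probability of a
    fine-tuned pair classifier)."""
    scored = [(relevance_scorer(question, fact), fact) for fact in candidate_facts]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [fact for _, fact in scored[:top_k]]

# Toy usage with a keyword-overlap stand-in for the trained classifier.
def overlap_scorer(question, fact):
    q, f = set(question.lower().split()), set(fact.lower().split())
    return len(q & f) / max(len(q), 1)

facts = ["The funded Aerospace Systems in 2017 was 9560.",
         "Product sales for 2018 increased $4.3 billion."]
print(retrieve_top_facts("What was funded Aerospace Systems in 2017?", facts, overlap_scorer, top_k=1))
```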
Although the proposed MT2Net model outperforms other baseline models, it still performs significantly worse than human experts, which reflects the challenge of MULTIHIERTT. Primarily, we find that models do not perform well on certain types of questions: 1) questions requiring reasoning across multiple tables; 2) questions requiring multi-step reasoning; 3) questions requiring reasoning over tables with complex hierarchical structures; and 4) questions requiring external financial knowledge. To deal with these challenges, we believe that four main directions of work may be workable: 1) designing a specialized module to handle multitable reasoning; 2) decomposing a complex question requiring multi-step reasoning into several simpler sub-questions that QA models can handle We have proposed MULTIHIERTT, a new largescale QA dataset that aims to solve complicated QA tasks that require numerical reasoning over documents containing multiple hierarchical tables and paragraphs. To address the challenge of MULTI-HIERTT, we introduce a baseline framework named MT2Net. The framework first retrieves supporting facts from financial reports and then generates executable reasoning programs to answer the question. The results of comprehensive experiments showed that current QA models (best F 1 : 38.43%) still lag far behind the human expert performance (F 1 : 87.03%). This motivates further research on developing QA models for such complex hybrid data with multiple hierarchical tables. Data in MULTIHIERTT is collected from the FinQA dataset For the internal annotation of MULTIHIERTT, each expert is paid $20 per hour. For the external annotation, we hire 23 graduate students majoring in finance or similar disciplines. We regard creating one question-reasoning pair, or validating one document's annotation as a unit task. And we pay around $1.1 for each unit task. Averagely, an annotator can finish 7 unit tasks per hour after training and practicing. And the hourly rates are in the range of $6 and $9 based on the different working speed (above the local average wage of similar jobs). In total, the approximate working hours to annotate MULTIHIERTT dataset is 1500 hours. The whole annotation work lasts about 70 days. | 1,501 | 2,165 | 1,501 |
Sentiment Analysis on Streaming User Reviews via Dual-Channel Dynamic Graph Neural Network | Sentiment analysis on user reviews has achieved great success thanks to the rapid growth of deep learning techniques. The large number of online streaming reviews also provides the opportunity to model temporal dynamics for users and products on the timeline. However, existing methods model users and products in the real world based on a static assumption and neglect their time-varying characteristics. In this paper, we present DC-DGNN, a dual-channel framework based on a dynamic graph neural network that models temporal user and product dynamics for sentiment analysis. Specifically, a dual-channel text encoder is employed to extract current local and global contexts from review documents for users and products. Moreover, user review streams are integrated into the dynamic graph neural network by treating users and products as nodes and reviews as new edges. Node representations are dynamically updated along with the evolution of the dynamic graph and used for the final prediction. Experimental results on five real-world datasets demonstrate the superiority of the proposed method. | Sentiment analysis on user reviews, inferring the overall sentiment polarity (e.g. 1-5 stars on the review site Amazon) of a user-written review document for a product, has gained popularity with the rapid growth of online review sites such as Amazon, Yelp, and IMDB. Compared to other sentiment analysis tasks For sentiment analysis on user reviews, early methods incorporate user and product embeddings by training randomly initialized embedding with the classifier Reiew #2 09 03, 2016 I read his latest book, Fool Me Once, which I not only disliked, I thought it was terrible. It is written in a fastpaced style that makes it a very quick read. Unfortunately, that's the one and only good thing I can say about it. Reiew #1 04 Reiew #3 10 03, 2016 I really enjoyed this book. The flow was very good and the writing was well done. I'm also glad that the majority of the book was set years after the "dark times", in which crime was nearly all-consuming. Reiew #1 04 20, 2015 Figure 1: An illustration of Streaming Reviews from a user or to a product. The left part indicates an example of a user who drafts the reviews with the positive words but scores lower with time. The right part indicates an example for a product that receives reviews with negative words but is scored higher with time. Therefore, in this paper, we propose a new study on sentiment analysis that focuses on streaming user reviews, which takes chronological reviews as a stream and aims to predict the rating score with the dynamic user and product representations. Since modeling pairwise relations between users and products in a graph has been proven useful in sentiment analysis on user reviews We evaluate the proposed DC-DGNN on five self-built datasets. The experimental results show that DC-DGNN outperforms all existing state-ofthe-art methods and the motivation for modeling the temporal dynamics of users and products is verified. | Incorporating user and product information into models through reviews is the main idea of user review sentiment analysis methods. Recently, to exploit more knowledge from useruser or user-product relations, Amplayo et al. (2018) introduces shared vectors that are constructed from similar users/products to address the cold start problem. 
Previous graph representation learning mainly focused on static graphs with a predefined number of nodes and edges. Real-world graphs, on the other hand, typically change over time as their graph structures change with the nodes and edges coming in or disappearing. In order to tackle real-world situations, Continuous Time Dynamic Graph Neural Network In sentiment analysis on streaming user reviews, reviews are sorted in chronological order in the form of E = E 1 , . . . , E T . Each of them is denoted as , where t i is timestamp for the review d i , u i is the user who wrote the review d i and p i is the product being reviewed. The task aims to predict the user's rating y towards the product under current condition E t based on the historical information E 1 , . . . , E t-1 , and learns a mapping function between user's rating y and condition E t , denoted as y = f(E t | E 1 , . . . , E t-1 ). Table We propose a Dual-Channel Dynamic Graph Neural Network (DC-DGNN) to model the dynamics of the streaming review data, as shown in Figure Earlier research has highlighted the necessity to differentiate between users and products to create distinct representations where h i is a d-dimensional feature vector for representing corresponding tokens x i . We denote text representation as H. Then we follow the subsequent steps to obtain global and local context. Channel 1: Self-Attention Mechanism for User's Global Context. We first project the document representation H into into three parts, key K, query Q, and value V . The global context for users is then obtained by means of a self-attention mechanism: where Q u , K u , V u are transformed from H through the linear layer. Channel 2: Convolution for Product's Local Context. DynamicConv where V p is transformed from H by a shared projection with Channel 1, and Depth -conv k V p is a traditional depthwise convolutional function, which formulas as follows: where the output corresponds to the calculation results of the document's i-th element of output channel c. In the end, the weight of different convolution cells is automatically selected through a gating mechanism for the product's local context: Our proposed dual-channel dynamic graph updating component is primarily inspired by JODIE Given a set of reviews with N users and M products, we first create two embedding lookup tables for both users and products as E u = u 1 , . . . , u N and E p = p 1 , . . . , p M , which also act as a current representation storage database. At time 0, each user u and product p in the database is initialized randomly from a uniform distribution into an r-dimensional vector u(0) ∈ R r and p(0) ∈ R r . When new input arrives, the representation stored in the database is taken out as u t - and p t -and added to the updating process. After this, the updated information will be renewed to the database as u(t) and p(t). Note that we only maintain the most recent representation of the users and products in these two databases. The entire updating process is designed as a dual-channel structure of user-part and productpart, where the user part contains a projection module and an updating module, and the product part only has an updating module. User Part. The first part for the user is a Projection Module to process temporal projections. We assume that user preference is continuously shifting even when there are no review actions. 
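The projection equation itself appears to have been lost during extraction. Based on the JODIE-style embedding projection that this module is modeled after, a plausible reconstruction is given below; treat the exact form as an assumption rather than the paper's verbatim equation.

```latex
% Hedged reconstruction of the user projection (JODIE-style): the stale user
% embedding is scaled element-wise by a learned time-context vector w derived
% from the elapsed time \Delta_u since the user's last review.
\hat{\mathbf{u}}(t^{-} + \Delta_u) = (1 + \mathbf{w}) \odot \mathbf{u}(t^{-})
```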
Therefore, for user embeddings after the elapsed time ∆ u , we perform the following projection function: where w ∈ R r is a time-context vector converted from ∆ u , and the larger ∆ u is, the more the projected embedding vector differs from the input embedding vector. Later, a Next Product Prediction (NIP) cell, which predicts the product that the user is likely to review, is proposed to enhance the representation of the user: where p -t -is product embedding before time t corresponding to the product from u's previous review, and W 1 ∈ R r×r , W 2 ∈ R r×r and B ∈ R r are trainable parameters in linear layer. For the NIP unit, our aim is to minimize the difference between the predicted product embedding p(t -+ ∆ u ) and the real product embedding p(t -+ ∆ p ), where ∆ p is the time difference between the current review and the last review for product p, which is different from ∆ u . Since we assume that the products do not change during the time interval, here p(t -+ ∆ p ) equals p(t -). Thus, the loss function can be represented as: where we use the L2 loss function to push the predicted product representation closer to the true product representation. The second part for the user is an Update Module, implemented based on an RNN cell, which generates the updated representation of the user after this review. It takes the projected user embedding u t -+ ∆ u , the previous product embedding p t -, and the user's time interval ∆ u as input, then integrates them into the current user representation: where W u 1 ∈ R r+1+d ×r , W u 2 ∈ R r×r and b u ∈ R r are parameters of user's update RNN cell. Through these two modules, the user's representation flows smoothly from t -to t. Product Part. The product part is similar to the user side, except that there is no projection module, as we consider the product to be static for the duration. In contrast to the user part, the positions of u t -+ ∆ u and p t -are switched, and the context used for product updating is H p : where W p 1 ∈ R r+1+d ×r , W p 2 ∈ R r×r and b p ∈ R r are parameters of product's update RNN cell. In order to ensure the quality of user and product embeddings, we apply the L2 loss between t -and t for users and products to prevent sudden changes in continuous time: where u t and u t -represent the user's previous and current representation respectively, while p corresponds to the product. For score prediction at time t, we calculate crossentropy loss between the predicted score y and true score y: where y = MLP u t ⊕p t ⊕H Finally, our training objective can be formulated as follows: where λ 1 , λ 2 and λ 3 as well as λ 4 are tradeoff parameters for each loss. We construct 5 datasets for sentiment analysis on streaming user reviews to evaluate the performance of our model and promote the development of this research. We first collect 5-core data from the Ama- We adopt BERT (base-uncased) The experimental results on the datasets from May 1996 to October 2018 are shown in the first four rows of Table 1) On all five datasets, the performance of the Bert-based model (Bert-Sequence, IUPC, and our DC-DGNN) is better than that of the Glove-based model (BiLSTM+Att, NGSAM, and CHIM), which proves that a high-quality feature extractor is still a necessity for sentiment analysis. 2) Our DC-DGNN model outperforms all other baselines on 5 datasets, confirming the superiority of DC-DGNN in modeling user and product temporal dynamics. 
Meanwhile, compared to JODIE, our DC-DGNN has significant improvements, which shows that we have successfully adapted the dynamic graph structure to our sentiment analysis task. 3) Comparing the results on the first four datasets with the last dataset, we observe that our model shows convincible and considerable performance when datasets have a relatively longer timespan, which indicates that our DC-DGNN has the potential to model on more real-world ever-increasing streaming datasets. In order to verify the impact of the modules proposed in this paper, ablation experiments were designed as follows: Anyone interested in this genre will most likely enjoy his books, several of which have been made into films. I am not a particularly big fan of such books, but have read a few, including several of his earlier ones. Four stars interesting and entertaining, but not deeply meaningful. I recommend that you read his novels in the order in which they were written and continue for as long as you find them interesting. I decided to read Paula Hawkins The Girl on the Train because it received quite good critical reviews. But after about 50 pages I found it to be depressing and difficult to follow since the story keeps switching among various people and dates. I skipped to the end and read the last 25 pages just to see how it ended. Rachel is divorced from Tom and is both depressed and an alcoholic. As the plot unfolds the characters interact with each other in negative ways to the tragic ending. acteristics in distribution and domain, so it is unreasonable to set the same embedding dimension for modeling on all datasets. To this end, we conduct a dimension exploration experiment to compare the impact of different dimension sizes on 5 datasets. Overall in Figure Figure In this section, we will discuss the time efficiency comparison of our proposed DC-DGNN and other models. We conduct a time test to compare our model's performance with that of IUPC and CHIM, which also learn user and product embeddings simultaneously. As shown in In this paper, we present novel research on sentiment analysis on streaming user reviews and propose a dual-channel dynamic graph neural network, namely DC-DGNN, to model the temporal dynamics of users and products. DC-DGNN dynamically updates user and product representations through the dual-channel evolution of the graph. On our 5 self-constructed datasets, by comprehensive evaluations and ablation study, we confirm superiority of our DC-DGNN and the impact of its modules. Through additional analytical experiments, we further demonstrate the importance of modeling user and product dynamics, hence verifying the conjecture in this paper. Although our model has shown excellent performance in sentiment analysis on streaming user reviews, we still believe it has some limitations: • Comparing datasets with longer timespan and shorter timespan, we find that improvement is not noticeable for the datasets with shorter timespan, which is a limitation for analysis only with short-term data. At the same time, our DC-DGNN model is also not friendly for datasets with sparse user/product information. • In early experimental attempts, for the structure built directly from JODIE, we found significant differences in prediction performance at different times. When making predictions on data with a relatively recent time, the performance is great, and when the time is farther away, the performance is sharply decreased. 
• The current structure we proposed considers forward integration and ignores backpropagation. In fact, the addition of subsequent reviews will also have an impact on past node representation. At the same time, we also ignore the high-order correlations between user-user and product-product. For example, user and user can be connected through a second-order homogeneous graph. The abovementioned more refined design in graph updating may be our future improvement direction. Dataset construct details. Firstly, the data was subjected to a cleaning operation where we removed duplicates, as well as the data where their texts are empty, and then removed special characters from the texts. Later, the processing steps to build the datasets for sentiment analysis on streaming user reviews are as follows: • Step1: Sort users by the number of reviews they have made, then filter the data containing the top 500 users from the original data according to their review counts. • Step2: For the data obtained in the first step, sort the products according to the number of reviews they received, and filter the data containing the top 500 products. • Step3: Normalise the data. For time processing, convert it to timestamp format and subtract the smallest timestamp in the dataset. For users and products, map the unique identifier to a numeric number. • Step4: Finally, the data are sorted by timestamp to fit in our chronological setting. The reason why Text-based model is necessary here. Although many previous methods that incorporate user and product information have shown strong potential, however, in some cases, they not only fail to learn high-quality representations but have counterproductive effects on score prediction. For example, in some datasets, only a few products are widely reviewed, and the number of reviews received by different products shows an imbalance. In fact, this data imbalance is very common in the real world, and even if we try to avoid it when constructing datasets, it still exists to some extent. Therefore, in this paper, we still take text-based model as a strong baseline. Experimental adjustments for adapting JODIE to sentiment analysis tasks. The original JODIE structure first divides the data by timespan and then further divides the data by t-batch. T-batch is a strategy to prevent the overlapping of users and items in each batch. However, we believe this strategy may be inappropriate as it disrupts the original time order, which contradicts the chronological setup. To address this, we only divide the data into multiple time periods according to timespan and abandon the t-batch strategy to ensure the original order of the data. Additionally, JODIE has a problem with gradients vanishing during testing. As JODIE was designed for recommendation systems, it keeps gradients propagation during testing, which is not ideal for our sentiment analysis task. To solve this, we propose a strategy that saves and updates the user and product representations in the databases during training. And during the testing phase, we only take the representations out of the database without updating them. In JODIE, there is also a small experimental problem. We find that its static embedding setting conflicts with successful training. JODIE sets onehot vectors as static embedding for each user and item in a rude way, which is not friendly to the situation where there are a large number of users and items, and will directly cause calculation overload and make it impossible to train. 
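The four construction steps above translate directly into a short preprocessing script. The sketch below uses pandas and assumes column names (user_id, product_id, timestamp, text, rating) that are illustrative rather than taken from the released data.

```python
import pandas as pd

def build_streaming_dataset(df: pd.DataFrame, top_k: int = 500) -> pd.DataFrame:
    """Steps 1-4: keep top users, then top products, normalize ids/time, sort by time."""
    # Step 1: keep reviews written by the 500 most active users.
    top_users = df["user_id"].value_counts().nlargest(top_k).index
    df = df[df["user_id"].isin(top_users)]

    # Step 2: within that subset, keep the 500 most-reviewed products.
    top_products = df["product_id"].value_counts().nlargest(top_k).index
    df = df[df["product_id"].isin(top_products)].copy()

    # Step 3: normalize timestamps and map users/products to integer ids.
    df["timestamp"] = df["timestamp"] - df["timestamp"].min()
    df["user_id"] = df["user_id"].astype("category").cat.codes
    df["product_id"] = df["product_id"].astype("category").cat.codes

    # Step 4: sort chronologically to match the streaming setting.
    return df.sort_values("timestamp").reset_index(drop=True)
```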
Therefore, in this paper, we deal with this problem by directly abandoning static embedding. Based on the unified dynamic graph library implemented by | 1,097 | 1,918 | 1,097 |
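The train/test strategy described above (refresh the stored user and product representations during training, but only read them at test time) can be sketched as a small embedding store. This is an illustrative reading of the adaptation, not the authors' code; update_fn in the usage comment is a hypothetical stand-in for the dual-channel RNN update cells.

```python
import torch

class DynamicEmbeddingStore:
    """Keeps only the most recent representation of each user/product node."""

    def __init__(self, num_nodes: int, dim: int):
        self.table = torch.rand(num_nodes, dim)   # uniform initialization at time 0

    def read(self, idx: int) -> torch.Tensor:
        return self.table[idx]

    def write(self, idx: int, new_repr: torch.Tensor, training: bool) -> None:
        # During training the store is refreshed after every review; during
        # testing, representations are read but never updated, so no state
        # changes or gradients leak from the evaluation stream.
        if training:
            self.table[idx] = new_repr.detach()

# Usage inside the review-stream loop (update_fn stands in for the RNN update cells):
# u = users.read(u_id); p = products.read(p_id)
# u_new, p_new = update_fn(u, p, review_context, delta_u, delta_p)
# users.write(u_id, u_new, training=True); products.write(p_id, p_new, training=True)
```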
Combining the Best of Two Worlds: A Hybrid Approach to Multilingual Coreference Resolution | We describe our system for the CoNLL-2012 shared task, which seeks to model coreference in OntoNotes for English, Chinese, and Arabic. We adopt a hybrid approach to coreference resolution, which combines the strengths of rule-based methods and learningbased methods. Our official combined score over all three languages is 56.35. In particular, our score on the Chinese test set is the best among the participating teams. | The CoNLL-2012 shared task extends last year's task on coreference resolution from a monolingual to a multilingual setting Our decision to adopt a hybrid approach is motivated by the observation that rule-based methods and learning-based methods each have their unique strengths. As shown by the Stanford coreference resolver Our system employs a fairly standard architecture, performing mention detection prior to coreference resolution. As we will see, however, the parameters of these two components are optimized jointly with respect to the desired evaluation measure. In the rest of this paper, we describe the mention detection component (Section 2) and the coreference resolution component (Section 3), show how their parameters are jointly optimized (Section 4), and present evaluation results on the development set and the official test set (Section 5). | To build a mention detector that strikes a relatively good balance between precision and recall, we employ a two-step approach. First, in the extraction step, we identify named entities (NEs) and employ language-specific heuristics to extract mentions from syntactic parse trees, aiming to increase our upper bound on recall as much as possible. Then, in the pruning step, we aim to improve precision by employing both language-specific heuristic pruning and language-independent learning-based pruning. Section 2.1 describes the language-specific heuristics for extraction and pruning, and Section 2.2 describes our learning-based pruning method. English. During extraction, we create a candidate mention from a contiguous text span s if (1) s is a PRP or an NP in a syntactic parse tree; or (2) s corresponds to a NE that is not a PERCENT, MONEY, QUANTITY or CARDINAL. During pruning, we remove a candidate mention m k if (1) m k is embedded within a larger mention m j such that m j and m k have the same head, where the head of a mention is detected using Collins's (1999) rules; (2) m k has a quantifier or a partitive modifier; or (3) m k is a singular common NP, with the exception that we retain mentions related to time (e.g., "today"). Chinese. Similar to English mention extraction, we create Chinese mentions from all NP and QP nodes in syntactic parse trees. During pruning, we remove a candidate mention m k if (1) m k is embedded within a larger mention m j such that m j and m k have the same head, except if m j and m k appear in a newswire document since, unlike other document annotations, Chinese newswire document annotations do consider such pairs coreferent; (2) m k is a NE that is a PERCENT, MONEY, QUANTITY and CARDINAL; or (3) m k is an interrogative pronoun such as " 什么 [what]", " 哪儿 [where]". Arabic. We employ as candidate mentions all the NPs extracted from syntactic parse trees, removing those that are PERCENT, MONEY, QUANTITY or CARDINAL. While the heuristic pruning method identifies candidate mentions, it cannot determine which candidate mentions are likely to be coreferent. 
To improve pruning (and hence the precision of mention detection), we employ learning-based pruning, where we employ the training data to identify and subsequently discard those candidate mentions that are not likely to be coreferent with other mentions. Specifically, for each mention m k in the test set that survives heuristic pruning, we compute its mention coreference probability, which indicates the likelihood that the head noun of m k is coreferent with another mention. If this probability does not exceed a certain threshold t C , we will remove m k from the list of candidate mentions. Section 4 discusses how t C is jointly learned with the parameters of the coreference resolution component to optimize the coreference evaluation measure. We estimate the mention coreference probability of m k from the training data. Specifically, since only non-singleton mentions are annotated in OntoNotes, we can compute this probability as the number of times m k 's head noun is annotated (as a gold mention) divided by the total number of times m k 's head noun appears. If m k 's head noun does not appear in the training set, we set its coreference probability to 1, meaning that we let it pass through the filter. In other words, we try to be conservative and do not filter any mention for which we cannot compute the coreference probability. Table Like the mention detection component, our coreference resolution component employs heuristics and machine learning. More specifically, we employ Stanford's multi-pass sieve approach A sieve is composed of one or more heuristic rules. Each rule extracts a coreference relation between two mentions based on one or more conditions. For example, one rule in Stanford's discourse processing sieve posits two mentions as coreferent if two conditions are satisfied: (1) they are both pronouns; and (2) they are produced by the same speaker. Sieves are ordered by their precision, with the most precise sieve appearing first. To resolve a set of mentions in a document, the resolver makes multiple passes over them: in the i-th pass, it attempts to use only the rules in the i-th sieve to find an antecedent for each mention m k . Specifically, when searching for an antecedent for m k , its candidate antecedents are visited in an order determined by their positions in the associated parse tree Our sieves for English are modeled after those employed by the Stanford resolver Recall that for Chinese we participated in both the closed track and the open track. The sieves we employ for both tracks are the same, except that we use NE information to improve some of the sieves in the system for the open track. 1. Chinese Head Match sieve: Recall from Section 2 that the Chinese newswire articles were coreference-annotated in such a way that a mention and its embedding mention can be coreferent if they have the same head. To identify these coreference relations, we employ the Same Head sieve, which posits two mentions m j and m k as coreferent if they have the same head and m k is embedded within m j . There is an exception to this rule, however: if m j is a coordinated NP composed of two or more base NPs, and m k is just one of these base NPs, the two mentions will not be considered coreferent (e.g., 查尔斯和戴安娜 [Charles and Diana] and 戴安娜 [Diana]). 3. Pronouns sieve: The Pronouns sieve resolves pronouns by exploiting grammatical information such as the gender and number of a mention. 
While such grammatical information is provided to the participants for English, the same is not true for Chinese. To obtain such grammatical information for Chinese, we employ a simple method, which consists of three steps. First, we employ simple heuristics to extract grammatical information from those Chinese NPs for which such information can be easily inferred. For example, we can heuristically determine that the gender, number and animacy for 她 [she] is {Female, Single and Animate}; and for 它们 [they] is {Unknown, Plural, Inani-mate}. In addition, we can determine the grammatical attributes of a mention by its named entity information. For example, a PERSON can be assigned the grammatical attributes {Unknown, Single, Animate}. Next, we bootstrap from these mentions with heuristically determined grammatical attribute values. This is done based on the observation that all mentions in the same coreference chain should agree in gender, number, and animacy. Specifically, given a training text, if one of the mentions in a coreference chain is heuristically labeled with grammatical information, we automatically annotate all the remaining mentions with the same grammatical attribute values. Finally, we automatically create six word lists, containing (1) animate words, (2) inanimate words, (3) male words, (4) female words, (5) singular words, and (6) plural words. Specifically, we populate these word lists with the grammatically annotated mentions from the previous step, where each element of a word list is composed of the head of a mention and a count indicating the number of times the mention is annotated with the corresponding grammatical attribute value. We can then apply these word lists to determine the grammatical attribute values of mentions in a test text. Due to the small size of these word lists, and with the goal of improving precision, we consider two mentions to be grammatically incompatible if for one of these three attributes, one mention has an Unknown value whereas the other has a known value. As seen in Table We only employ one sieve for Arabic, the exact match sieve. While we experimented with additional sieves such as the Head Match sieve and the Pronouns sieve, we ended up not employing them because they do not yield better results. As mentioned before, we improve the sieve approach by incorporating lexical information. To exploit lexical information, we first compute lexical probabilities. Specifically, for each pair of mentions m j and m k in a test text, we first compute two probabilities: (1) the string-pair probability (SP-Prob), which is the probability that the strings of the two mentions, s j and s k , are coreferent; and (2) the head-pair probability (HP-Prob), which is the probability that the head nouns of the two mentions, h j and h k , are coreferent. For better probability estimation, we preprocess the training data and the two mentions by (1) downcasing (but not stemming) each English word, and (2) replacing each Arabic word w by a string formed by concatenating w with its lemmatized form, its Buckwalter form, and its vocalized Buckwalter form. Note that SP-Prob(m j ,m k ) (HP-Prob(m j ,m k )) is undefined if one or both of s j (h j ) and s k (h k ) do not appear in the training set. Next, we exploit these lexical probabilities to improve the resolution of m j and m k by presenting two extensions to the sieve approach. The first extension aims to improve the precision of the sieve approach. 
Specifically, before applying any sieve, we check whether SP-Prob(m j ,m k ) ≤ t SP L or HP-Prob(m j ,m k ) ≤ t HP L for some thresholds t SP L and t HP L . If so, our resolver will bypass all of the sieves and simply posit m j and m k as not coreferent. In essence, we use the lexical probabilities to improve precision, specifically by positing two mentions as not coreferent if there is "sufficient" information in the training data for us to make this decision. Note that if one of the lexical probabilities (say SP-Prob(m j ,m k )) is undefined, we only check whether the condition on the other probability (in this case HP(m j ,m k ) ≤ t HP L ) is satisfied. If both of them are undefined, this pair of mentions will survive this filter and be processed by the sieve pipeline. The second extension, on the other hand, aims to improve recall. Specifically, we create a new sieve, the Lexical Pair sieve, which we add to the end of the sieve pipeline and which posits two mentions m j and m k as coreferent if SP-Prob(m j ,m k ) ≥ t SP U or HP-Prob(m j ,m k ) ≥ t HP U . In essence, we use the lexical probabilities to improve recall, specifically by positing two mentions as coreferent if there is "sufficient" information in the training data for us to make this decision. Similar to the first extension, if one of the lexical probabilities (say SP-Prob(m j ,m k )) is undefined, we only check whether the condition on the other probability (in this case HP(m j ,m k ) ≥ t HP U ) is satisfied. If both of them are undefined, the Lexical Pair sieve will not process this pair of mentions. The four thresholds, t SP L , t HP L , t SP U , and t HP U , will be tuned to optimize coreference performance on the development set. As discussed before, we learn the system parameters to optimize coreference performance (which, for the shared task, is U avg, the unweighted average of the three commonly-used evaluation measures, MUC, B 3 , and CEAF e ) on the development set. Our sys-tem has two sets of tunable parameters. So far, we have seen one set of parameters, namely the five lexical probability thresholds, t C , t SP L , t HP L , t SP U , and t HP U . The second set of parameters contains the rule relaxation parameters. Recall that each rule in a sieve may be composed of one or more conditions. We associate with condition i a parameter λ i , which is a binary value that controls whether condition i should be removed or not. In particular, if λ i =0, condition i will be dropped from the corresponding rule. The motivation behind having the rule relaxation parameters should be clear: they allow us to optimize the hand-crafted rules using machine learning. This section presents two algorithms for tuning these two sets of parameters on the development set. Before discussing the parameter estimation algorithms, recall from the introduction that one of the distinguishing features of our approach is that we build genre-specific resolvers. In other words, for each genre of each language, we (1) learn the lexical probabilities from the corresponding training set; (2) obtain optimal parameter values Θ 1 and Θ 2 for the development set using parameter estimation algorithms 1 and 2 respectively; and (3) among Θ 1 and Θ 2 , take the one that yields better performance on the development set to be the final set of parameter estimates for the resolver. Parameter estimation algorithm 1. This algorithm learns the two sets of parameters in a sequential fashion. 
Specifically, it first tunes the lexical probability thresholds, assuming that all the rule relaxation parameters are set to one. To tune the five probability thresholds, we try all possible combinations of the five probability thresholds and select the combination that yields the best performance on the development set. To ensure computational tractability, we allow each threshold to have the following possible values. For t C , the possible values are -0.1, 0, 0.05, 0.1, . . ., 0.3; for t SP L and t HP L , the possible values are -0.1, 0, 0.05, 0.15, . . ., 0.45; and for t SP U and t HP U , the possible values are 0.55, 0.65, . . ., 0.95, 1.0 and 1.1. Note that the two threshold values -0.1 and 1.1 render a probability threshold useless. For example, if t C = -0.1, that means all mentions will survive learning-based pruning in the mention detection component. As another example, if t SP U and t HP U are both 1.1, it means that the String Pair sieve will be useless because it will not posit any pair of mentions as coreferent. Given the optimal set of probability thresholds, we tune the rule relaxation parameters. To do so, we apply the backward elimination feature selection algorithm, viewing each condition as a feature that can be removed from the "feature set". Specifically, all the parameters are initially set to one, meaning that all the conditions are initially present. In each iteration of backward elimination, we identify the condition whose removal yields the highest score on the development set and remove it from the feature set. We repeat this process until all conditions are removed, and identify the subset of the conditions that yields the best score on the development set. Parameter estimation algorithm 2. In this algorithm, we estimate the two sets of parameters in an interleaved, iterative fashion, where in each iteration, we optimize exactly one parameter from one of the two sets. More specifically, (1) in iteration 2n, we optimize the (n mod 5)-th lexical probability threshold while keeping the remaining parameters constant; and (2) in iteration 2n+1, we optimize the (n mod m)-th rule relaxation parameter while keeping the remaining parameters constant, where n = 1, 2, . . ., and m is the number of rule relaxation parameters. When optimizing a parameter in a given iteration, the algorithm selects the value that, when used in combination with the current values of the remaining parameters, optimizes the U avg value on the development set. We begin the algorithm by initializing all the rule relaxation parameters to one; t C , t SP L and t HP L to -0.1; and t SP U and t HP U to 1.1. This parameter initialization is equivalent to the configuration where we employ all and only the hand-crafted rules as sieves and do not apply learning to perform any sort of optimization at all. The results of our Full coreference resolver on the development set with optimal parameter values are shown in Table A few points regarding the results in Table Table To get a better sense of the usefulness of the probability thresholds, we show in Tables 8 and 9 some development set examples of correctly and incorrectly identified/pruned mentions and coreferent/non-coreferent pairs for English and Chinese, respectively. Note that no Chinese examples for t C are shown, since its tuned value cor- responds to the case where no mentions should be pruned. We presented a multilingual coreference resolver designed for the CoNLL-2012 shared task. 
We adopted a hybrid approach to coreference resolution, which combined the advantages of rule-based methods and learning-based methods. Specifically, we proposed two extensions to Stanford's multi-pass sieve approach, which involved the incorporation of lexical information using machine learning and the acquisition of genre-specific resolvers. Experimental results demonstrated the effectiveness of these extensions, whether they were applied in isolation or in combination. In future work, we plan to explore other ways to combine rule-based methods and learning-based methods for coreference resolution, as well as improve the performance of our resolver on Arabic. | 421 | 863 | 421
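A minimal sketch of the two lexical-probability extensions described above: the precision filter applied before the sieves and the recall-oriented Lexical Pair sieve appended after them. It collapses the multi-pass architecture into a single pairwise decision for brevity; the sp_prob/hp_prob tables and the sieve callables are assumptions, and unseen pairs are represented as undefined (None).

```python
def resolve_pair(m_j, m_k, sp_prob, hp_prob, sieves,
                 t_sp_l, t_hp_l, t_sp_u, t_hp_u):
    """Decide whether two mentions are coreferent, using lexical probabilities
    estimated from the training data to filter (precision) and to link (recall)."""
    sp = sp_prob.get((m_j.string, m_k.string))  # None means undefined (unseen pair)
    hp = hp_prob.get((m_j.head, m_k.head))

    # Extension 1 (precision): bypass all sieves and posit "not coreferent"
    # when the training data gives sufficient evidence against the pair.
    if (sp is not None and sp <= t_sp_l) or (hp is not None and hp <= t_hp_l):
        return False

    # Ordinary hand-crafted sieves, most precise first.
    for sieve in sieves:
        if sieve(m_j, m_k):
            return True

    # Extension 2 (recall): the Lexical Pair sieve at the end of the pipeline.
    # If both probabilities are undefined, the pair is left unresolved here.
    if (sp is not None and sp >= t_sp_u) or (hp is not None and hp >= t_hp_u):
        return True

    return False
```

The four thresholds correspond to t_SP_L, t_HP_L, t_SP_U, and t_HP_U, which the paper tunes on the development set together with the rule relaxation parameters.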
Question-Answering in a Low-resourced Language: Benchmark Dataset and Models for Tigrinya | Question-Answering (QA) has seen significant advances recently, achieving near human-level performance over some benchmarks. However, these advances focus on high-resourced languages such as English, while the task remains unexplored for most other languages, mainly due to the lack of annotated datasets. This work presents a native QA dataset for an East African language, Tigrinya. The dataset contains 10.6K question-answer pairs spanning 572 paragraphs extracted from 290 news articles on various topics. The dataset construction method is discussed, which is applicable to constructing similar resources for related languages. We present comprehensive experiments and analyses of several resource-efficient approaches to QA, including monolingual, cross-lingual, and multilingual setups, along with comparisons against machine-translated silver data. Our strong baseline models reach 76% in the F1 score, while the estimated human performance is 92%, indicating that the benchmark presents a good challenge for future work. We make the dataset, models, and leaderboard publicly available. 1 | Question Answering (QA) and Machine Reading Comprehension (MRC) have seen significant advances in recent years, achieving human-level performance on large-scale benchmarks This work presents TiQuAD, the first publicly available Question-Answering Dataset for Tigrinya; see Figure We assess the quality of annotations and explore strong baselines by fine-tuning TiRoBERTa and TiELECTRA The contributions of this work are summarized as follows: (1) We build the first questionanswering dataset for Tigrinya and make it publicly available. (2) We present an in-depth analysis of the challenges of question answering in Tigrinya based on the dataset. (3) We apply transformer-based language models to the question-answering task in Tigrinya and compare it with datasets of other languages. (4) We investigate various resourceefficient cross-lingual and multilingual approaches to QA and assess the utility of the native dataset. 2 Related Work 2.1 Tigrinya Language Tigrinya (ISOv3: tir) is a Semitic language, part of the Afro-Asiatic family with over 10 million native speakers in the East African regions of Eritrea and Northern Ethiopia. Tigrinya is closely related to Amharic and Tigre languages that are also spoken in similar regions and share the same ancestor, the now extinct Ge'ez language. In recent years, there is a growing research body and interest in Tigrinya. | Native reading comprehension datasets beyond the English language are relatively rare. Efforts have been made to build MRC datasets in Chinese, French, German, and Korean, among others, all of which are designed following the formulation of SQuAD. The SberQuAD dataset Cross-lingual Question Answering Languagespecific datasets are costly and challenging to build, and one alternative is to develop cross-lingual models that can transfer to a target without requiring training data in that language Multilingual Question Answering The MLQA dataset Translated QA datasets Another relatively inexpensive alternative to building a native annotated QA dataset is translating an existing English dataset to the target language. Carrino et al. ( TiQuAD is designed following the task formulation of SQuAD The dataset was constructed in four stages: First, a diverse set of articles are collected from which we extract paragraphs that will serve as contexts. 
Second, the initial question and answer pairs are annotated for all the extracted paragraphs. Third, additional answers are annotated for all the questions in the development and test sets. Fourth, we post-process the annotations for quality control and remove noisy examples. The final dataset contains over 10.6K question-answer pairs across 572 paragraphs. While the size is on the smaller end compared to the English datasets, it reflects a realistic amount of data that researchers of low-resourced languages can acquire with a limited annotation budget. In the absence of sufficient Tigrinya content on Wikipedia, the context paragraphs were instead drawn from news articles. In the first round of annotation, we recruited eight native speakers of Tigrinya [4 female, 4 male] with ages ranging from 20 to 46. Each annotator is presented with a random paragraph from the collection and tasked to write questions that can be explicitly answered by a contiguous segment of text in the provided context. The annotators were encouraged to phrase questions in their own words instead of copying words from the context and to highlight the minimal span of characters that answers the respective question. The annotators were asked to spend on average one minute on each question and answer pair. The end result of this stage is a set of 6,674 unique questions across all documents. In the second round of annotation, we asked four of the original annotators to provide a second answer to questions in the development and test parts of the dataset. Our annotation tool ensures that annotators cannot give a second answer to questions they themselves contributed in the initial annotation stage. Finally, we recruited two new annotators to provide a third reference answer to all the questions in the evaluation sets. These annotators were not involved in the first round of the annotation; with no prior exposure to the task, they are expected to show less bias towards the question formulation. We ensure that all entries in the test and development sets have at least three answers from different annotators, resulting in 6,205 answers for 2,056 questions. Throughout the annotation campaign, we collected over 6,674 unique questions and 10,600 answers, i.e., 2,056 of the questions had at least three ground-truth answers by different annotators. From these annotations, we discarded 166 entries (2.5%) that either contained apparent errors, were incomplete, were unanswerable from the context, or had a wrong question formulation such as verification (yes/no) and cloze type. For instance, a Tigrinya question meaning "Lower Shmangus is located about 28 km from Asmara on the ___ side." is in cloze format and was hence deleted. We also removed outlier entries that had answers with more than 250 characters. To assess the quality and diversity of the dataset, we performed various analyses of the annotations. We clustered all questions in the development set into nine types using a manually curated list of question words; the most frequent types account for ≈72% of all the questions. These types of questions also make up the largest proportions in other datasets. The degree of lexical overlap between questions and paragraphs might affect the difficulty of a dataset.
To assess this behavior in TiQuAD, we analyzed 100 random samples from the development set and assigned them to four categories of question-context-answer linguistic relationships proposed by The results of our findings are presented in Table We randomly selected 100 question-answer pairs from the validation set to assess the accuracy and length of the answers manually. We specifically check whether each annotated answer is correct and has a minimal length in answering the corresponding question. We observe that 74% of the answers are accurate and with a minimum span length, while a significant minority, 23%, contain extra information and are longer by a factor of 1.5 on average than the desired span. Only 3% were shorter than the optimal span length, such as partial annotation of the answer. The We assess the human performance on TiQuAD's development and test sets, where each question has at least three answers. In SQuAD, We analyzed the cases where the human annotators failed to agree and observed that they are mainly due to extra tokens in the answer spans rather than fundamental differences. Given a question Q and a context paragraph P from an entry in a QA dataset, the training objective is to predict the start and end positions of the answer span within the paragraph. Following The score for a candidate span (i, j) is defined as the product of the start and end position probabilities, and then the highest-scoring span where j ≥ i is used as the final prediction. Score(i, j) = P start (i) • P end (j). (3) The loss function L is the sum of the negative log-likelihoods of the ground truth start and end positions, denoted as i * and j * , respectively. During training, a gradient-based optimizer minimizes the loss and gradually enables the model to accurately predict the answer spans in the context. We use the standard Exact Match (EM) and F1 metrics for evaluation. EM is the percentage of predictions that exactly match the ground truth. F1 score is the average overlap between the predicted tokens and the ground truth, hence rewards partial matches. For both metrics, when there are multiple ground truth answers for a given question in the test set, the final score represents the highest overlap between the prediction and all the reference answers. To improve the robustness of the evaluation, SQuAD We designed six experimental configurations and evaluated each on four models of varying sizes, ranging from 14 to 278 million parameters. Details of the models are presented in Table (2) Zero-shot cross-lingual setting: We investigate transfer learning by training models on an English dataset and evaluating them on Tigrinya -treating QA as a language-independent task; and (3) Multilingual setting: We investigate models trained on combined Tigrinya and English QA datasets and evaluated in a native setup. In all experiments, we use AdamW Translation of English dataset For the experiments, we machine translated the training part of SQuAD v1.1 to Tigrinya. The positional information of the answer spans needs to be computed as it is generally lost during translation, making it difficult to retain the original data size. As a remedy, we applied two machine translation services In this section, we present and discuss the results of the proposed experimental setups. 6.1 End-to-end Tigrinya QA In this setup, we train all models on the native and translated Tigrinya datasets then evaluate on the TiQuAD development and test sets. 
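The span selection rule of Eq. (3) above, i.e., choosing the span (i, j) with j ≥ i that maximizes P_start(i) · P_end(j), can be sketched as follows. The max_answer_len cap is a common practical addition and not part of the formulation in the paper.

```python
import numpy as np

def best_span(p_start, p_end, max_answer_len=30):
    """Return the (i, j) maximizing P_start(i) * P_end(j) subject to j >= i."""
    p_start, p_end = np.asarray(p_start), np.asarray(p_end)
    best, best_score = (0, 0), -1.0
    for i in range(len(p_start)):
        j_hi = min(len(p_end), i + max_answer_len)
        for j in range(i, j_hi):
            score = p_start[i] * p_end[j]
            if score > best_score:
                best, best_score = (i, j), score
    return best, best_score
```

For example, with p_start = [0.1, 0.7, 0.2] and p_end = [0.1, 0.2, 0.7], the function returns ((1, 2), 0.49).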
The experimental results are presented in the corresponding results table. When comparing models of comparable sizes, we observe that the monolingual models achieve better performance than their multilingual counterparts. We investigate the transferability of QA models in a zero-shot setting by training on the high-resource language English and evaluating them on Tigrinya. The multilingual models, AfriBERTa-Base and XLM-R-Base, trained on the English SQuAD v1.1, achieve 32-34% in F1 score on the TiQuAD test set and outperform their monolingual counterparts. While the models show promising results in transferring the task between two linguistically distant languages, those trained on the small native dataset remain vastly superior. In this setup, we train the models on the combined English and Tigrinya training datasets, exposing the models to both languages, then evaluate on the native TiQuAD. We observe a consistent improvement in performance across all models in contrast to the previous setups. For instance, XLM-R-Base in the multilingual setup obtains an increase of over three points in F1 score, setting the state of the art on the TiQuAD test set at 68.06% EM and 76.58% F1 score. Our experiments show that transferring models from high- to low-resourced languages is a viable approach to mitigate the scarcity of annotated datasets. In our case, the benefit emerges when the native and translated Tigrinya datasets are combined with their English counterpart. In this work, we presented the Tigrinya Question Answering Dataset (TiQuAD). The context paragraphs were collected from high-quality news articles of diverse genres, and we collaborated with native speakers to annotate over 6.5K unique questions and 10.6K answers. The development and test sets were further enriched with additional answers to enable a robust evaluation. We conducted comprehensive experiments in monolingual, cross-lingual, and multilingual settings. The estimated human performance on the test set is 81.3% EM and 92.1% F1 score, while the top-performing model achieves 68.06% EM and 76.58% F1, leaving room for future improvements. There are two known limitations of the SQuAD-like annotation approach we used in this work: (1) it can result in higher lexical overlap between the context and question pairs; (2) it leads to proportionally fewer truly information-seeking questions. This research adheres to the academic and professional ethics guidelines of our university. Our annotation task was approved by the Institutional Review Board (IRB). | 1,096 | 1,373 | 1,096
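For reference, the EM/F1 evaluation described above, which keeps the best overlap against the multiple reference answers for a question, roughly corresponds to the following sketch; the normalization here is simplified compared to the official SQuAD-style script, which also strips articles and punctuation.

```python
import re
from collections import Counter

def normalize(text):
    # Simplified normalization: lowercase and collapse whitespace only.
    return re.sub(r"\s+", " ", text.lower()).strip()

def token_f1(prediction, reference):
    pred, ref = normalize(prediction).split(), normalize(reference).split()
    common = Counter(pred) & Counter(ref)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

def score(prediction, references):
    """EM/F1 against multiple gold answers: keep the best-matching reference."""
    em = max(float(normalize(prediction) == normalize(r)) for r in references)
    f1 = max(token_f1(prediction, r) for r in references)
    return em, f1
```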
Coreference in Wikipedia: Main Concept Resolution | Wikipedia is a resource of choice exploited in many NLP applications, yet we are not aware of recent attempts to adapt coreference resolution to this resource. In this work, we revisit a seldom studied task which consists in identifying in a Wikipedia article all the mentions of the main concept being described. We show that by exploiting the Wikipedia markup of a document, as well as links to external knowledge bases such as Freebase, we can acquire useful information on entities that helps to classify mentions as coreferent or not. We designed a classifier which drastically outperforms fair baselines built on top of state-of-the-art coreference resolution systems. We also measure the benefits of this classifier in a full coreference resolution pipeline applied to Wikipedia texts. | Coreference Resolution (CR) is the task of identifying all mentions of entities in a document and grouping them into equivalence classes. CR is a prerequisite for many NLP tasks. For example, in Open Information Extraction (OIE) Most CR systems, including state-of-the-art ones It is now widely accepted that coreference resolution systems trained on newswire data performs poorly when tested on other text genres Wikipedia is a large, multilingual, highly structured, multi-domain encyclopedia, providing an increasingly large wealth of knowledge. It is known to contain well-formed, grammatical and meaningful sentences, compared to say, ordinary internet documents. It is therefore a resource of choice in many NLP systems, see While being a ubiquitous resource in the NLP community, we are not aware of much work conducted to adapt CR to this text genre. Two notable exceptions are Our main contribution in this work is to revisit the task initially discussed in More specifically, we frame this task as a binary classification problem, where one has to decide whether a detected mention refers to the MC. Our classifier exploits carefully designed features extracted from Wikipedia markup and characteristics, as well as from Freebase; many of which we borrowed from the related literature. We show that our approach outperforms stateof-the-art generic coreference resolution engines on this task. We further demonstrate that the integration of our classifier into the state-of-the-art rule-based coreference system of The paper is organized as follows. We discuss related works in Section 2. We describe in Section 3 the Wikipedia-based dataset we exploited in this study, and show the drop in performance of state-of-the-art coreference resolution systems when faced to this corpus. We describe in Section 4 the baselines we built on top of two state-ofthe-art coreference resolution systems, and present our approach in Section 5. We report on experiments we conducted in section 6, and conclude in Section 7. | Our approach is inspired by, and extends, previous works on coreference resolution which show that incorporating external knowledge into a CR system is beneficial. In particular, a variety of approaches One issue with all the aforementioned studies is that inaccuracies often cause cascading errors in the pipeline Dealing specifically with Wikipedia articles, we can directly exploit the wealth of markup available (redirects, internal links, links to Freebase) without resorting to named-entity linking, thus benefiting from much less ambiguous information on mentions. 
As our approach is dedicated to Wikipedia articles, we used the freely WikiCoref Since most coreference resolution systems for English are trained and tested on ACE We evaluate the systems on the whole dataset, using the v8.01 of the CoNLL scorer 2 Somehow more surprisingly, the rule-based system of The WikiCoref dataset is far smaller than the OntoNotes one; still, the authors paid attention to sample Wikipedia articles of various characteristics: size, topic (people, organizations, locations, events, etc.) and internal link density. Therefore, we believe our results to be representative. Those results further confirm the conclusions in Since there is no system readily available for our task, we devised four baselines on top of two available coreference resolution systems. Given the output of a CR system applied on a Wikipedia article, our goal here is to isolate the coreference chain that represents the main concept. We experimented with several heuristics, yielding the following baselines. B1 picks the longest coreference chain identified and considers that its mentions are those that co-refer to the main concept. The underlying assumption is that the most mentioned concept in a Wikipedia article is the main concept itself. B2 picks the longest coreference chain identified 2 It turns out that, for CR systems, mentions of the MC often are spread over several coreference chains. Therefore we devised two more baselines that aggregate chains, with an expected increase in recall. B3 conservatively aggregates chains containing a mention that exactly matches the MC title. B4 more loosely aggregates all chains that contain at least one mention whose span is a substring of the title. We observed that, for pronominal mentions, those baselines were not performing very well in terms of recall. With the aim of increasing recall, we added to the chain all the occurrences of pronouns found to refer to the MC (at least once) by the baseline. This heuristic was first proposed by Our approach is composed of a preprocessor which computes a representation of each mention in an article as well as its main concept; and a feature extractor which compares both representations for inducing a set of features. We extract mentions using the same mention detection algorithm embedded in We leverage the hyperlink structure of the article in order to enrich the list of predicted mentions with shallow semantic attributes. For each link found within the article under consideration, we look through the list of predicted mentions for all mentions that match the surface string of the link. We assign to them the attributes (entity type, gender and number) extracted from the Freebase entry (if it exists) corresponding to the Wikipedia article the hyperlink points to. This module behaves as a substitute to the named-entity linking pipelines used in other works, such as We use the WikipediaMiner (Milne and Witten, 2008) API for easily accessing any piece of structure (clean text, labels, internal links, redirects, etc) in Wikipedia, and Jena In the end, we represent a mention by three strings (actual mention span, head word, and span up to the head noun), as well as its coarse attributes (entity type, gender and number). Figure We experimented with a few hundred features for characterizing each mention, focusing on the most promising ones that we found simple enough to compute. In part, our features are inspired by coreference systems that use Wikipedia and Freebase as feature sources (see Section 2). 
These features, along with others related to the characteristics of Wikipedia texts, allow us to recognize mentions of the MC more accurately than current CR systems. We make a distinction between features computed for pronominal mentions and features computed from the other mentions. For each mention, we compute seven families of features we sketch below. base Number of occurrences of the mention span and the mention head found in the list of candidate mentions. We also add a normal-ized version of those counts (frequency / total number of mentions). title, inferred type, name variants, entity type Most often, a concept is referred to by its name, one of its variants, or its type which are encoded in the four first fields of our MC representation. We define four families of comparison features, each corresponding to one of the first four fields of a MC representation (see Figure tag Part-of-speech tags of the first and last words of the mention, as well as the tag of the words immediately before and after the mention in the article. We convert this into 34×4 binary features (presence/absence of a specific combination of tags). main Boolean features encoding whether the MC and the mention coarse attributes matches; also we use conjunctions of all pairs of features in this family. We characterize pronominal mentions by five families of features, which, with the exception of the first one, all capture information extracted from Wikipedia. base The pronoun span itself, number, gender and person attributes, to which we add the number of occurrences of the pronoun, as well as its normalized count. The most frequently occurring pronoun in an article is likely to co-refer to the main concept, and we expect these features to capture this to some extent. main MC coarse attributes, such as NER type, gender, number (see Figure tag Part-of-speech of the previous and following tokens, as well as the previous and the next POS bigrams (this is converted into 2380 binary features). position Often, pronouns at the beginning of a new section or paragraph refer to the main concept. Therefore, we compute 5 (binary) features encoding the relative position (first, first tier, second tier, last tier, last) of a mention in the sentence, paragraph, section and article. distance Within a sentence, we search before and after the mention for an entity that is compatible (according to Freebase information) with the pronominal mention of interest. If a match is found, one feature encodes the distance between the match and the mention; another feature encodes the number of other compatible pronouns in the same sentence. We expect that this family of features will help the model to capture the presence of local (within a sentence) co-references. In this section, we first describe the data preparation we conducted (section 6.1), and provide details on the classifier we trained (section 6.2). Then, we report experiments we carried out on the task of identifying the mentions co-referent (positive class) to the main concept of an article (section 6.3). We compare our approach to the baselines described in section 4, and analyze the impact of the families of features described in section 5. We also investigate a simple extension of Dcoref which takes advantage of our classifier for improving coreference resolution (section 6.4). Each article in WikiCoref was part-of-speech tagged, syntactically parsed and the namedentities were identified. 
This was done thanks to the Stanford CoreNLP toolkit We trained two Support Vector Machine classifiers During training, we do not use gold mention attributes, but we automatically enrich mentions with the information extracted from Wikipedia and Freebase, as described in Section 5. We focus on the task of identifying all the mentions referring to the main concept of an article. We measure the performance of the systems we devised by average precision, recall and F1 rates computed by a 10-fold cross-validation procedure. We generated baselines for all the systems discussed in Section 3, but found results derived from statistical approaches to be close enough that we only include results of two systems in the sequel: Dcoref Clearly, our approach outperforms all baselines for both pronominal and non-pronominal mentions, and across all metrics. On all mentions, our best classifier yields an absolute F1 increase of 13 points over Dcoref, and 15 points over Scoref. In order to understand the impact of each family of features we considered in this study, we trained various classifiers in a greedy fashion. We started with the simplest feature set (base) and gradually added one family of features at a time, keeping at each iteration the one leading to the highest increase in F1. The outcome of this process for the pronominal mentions is reported in Table The entity type family further improves performance, mainly because it plays a role similar to the inferred type features extracted from Freebase. This indicates that the noun type induced directly from the first sentence of a Wikipedia article is pertinent and can complement the types extracted from Freebase when available or serve as proxy when they are missing. In 2002, the team wore a patch commemorating their inaugural season.. The name Houston Oilers was unavailable to the expansion team... f MC= Johnston Atoll In 1993 , Congress appropriated no funds for the Johnston Atoll Safeguard C mission , bringing it* to an end. g MC= Houston Texans The Houston Texans are a professional American football team based in Houston* , Texas. Finally, the main family significantly increases precision (over 4 absolute points) with no loss in recall. To illustrate a negative example, the resulting classifier wrongly recognizes mentions referring to the town Houston as coreferent to the football team in example (g). We handpicked a number of classification errors and found that most of these are difficult coreference cases. For instance, our best classifier fails to recognize that the mention the expansion team refers to the main concept While identifying all the mentions of the MC in a Wikipedia article is certainly useful in a number of NLP tasks We ran this modified system (called Dcoref++) on the WikiCoref dataset, where mentions were automatically predicted. The results of this system are reported in Table We observe an improvement for Dcoref++ over the other systems, for all the metrics. In particular, Dcoref++ increases by 4 absolute points the CoNLL F1 score. This shows that early decisions taken by our classifier benefit other sieves as well. It must be noted, however, that the overall gain in precision is larger than the one in recall. We developed a simple yet powerful approach that accurately identifies all the mentions that co-refer 6 We use predicted results from 10-fold cross-validation. to the concept being described in a Wikipedia article. We tackle the problem with two (pronominal and non-pronominal) models based on well designed features. 
The resulting system is compared to baselines built on top of state-of-the-art systems adapted to this task. Despite being relatively simple, our model reaches 89% in F1 score, an absolute gain of 13 F1 points over the best baseline. We further show that incorporating our system into the Stanford deterministic rule-based system improves overall coreference resolution performance on Wikipedia texts. The material used in this study, as well as a (huge) dump of all the mentions in English Wikipedia (version of April 2013) that our classifier identified as referring to the main concept, along with the information we extracted from Wikipedia and Freebase, are made publicly available. | 792 | 2,017 | 792
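The simplest chain-selection baselines described above (B1 and B3) reduce to a few lines over the output chains of a generic coreference resolver; the chain and mention structures used here are assumptions, with each chain represented as a list of mention objects carrying their surface text.

```python
def b1_longest_chain(chains):
    """Baseline B1: assume the most-mentioned entity is the main concept."""
    return max(chains, key=len) if chains else []

def b3_title_match(chains, title):
    """Baseline B3: aggregate every chain containing a mention that exactly
    matches the article title, treating the union as the main-concept chain."""
    merged = []
    for chain in chains:
        if any(m.text == title for m in chain):
            merged.extend(chain)
    return merged
```

B2 and B4 follow the same pattern with looser matching (title substrings), and the pronoun heuristic simply adds all occurrences of any pronoun already found in the selected chain.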
Task-oriented Dialogue System for Automatic Diagnosis | In this paper, we make a move to build a dialogue system for automatic diagnosis. We first build a dataset collected from an online medical forum by extracting symptoms from both patients' self-reports and conversational data between patients and doctors. Then we propose a taskoriented dialogue system framework to make the diagnosis for patients automatically, which can converse with patients to collect additional symptoms beyond their self-reports. Experimental results on our dataset show that additional symptoms extracted from conversation can greatly improve the accuracy for disease identification and our dialogue system is able to collect these symptoms automatically and make a better diagnosis. | Automatic phenotype identification using electronic health records (EHRs) has been a rising topic in recent years In general, each EHR contains multiple types of data, including personal information, admission note, diagnose tests, vital signs and medical image. And it is collected accumulatively following a diagnostic procedure in clinic, which involves interactions between patients and doctors and some complicated medical tests. Therefore, it is very expensive to collect EHRs for different diseases. How to collect the information from patient automatically remains the challenge for automatic diagnosis. Recently, due to its promising potentials and alluring commercial values, research about taskoriented dialogue system (DS) has attracted increasing attention in different domains, including ticket booking However, there is a gap to fill for applying DS in disease identification. There are basically two major challenges. First, the lack of annotated medical dialogue dataset. Second, no available DS framework for disease identification. By addressing these two problems, we make the first move to build a dialogue system facilitating automatic information collection and diagnosis making for medical domain. Contributions are two-fold: • We annotate the first medical dataset for dialogue system that consists of two parts, one is self-reports from patients and the other is conversational data between patients and doctors. • We propose a reinforcement learning based framework for medical DS. Experiment results on our dataset show that our dialogue system is able to collect symptoms from patients via conversation and improve the accuracy for automatic diagnosis. | Our dataset is collected from the pediatric department in a Chinese online healthcare community For each patient, we can also obtain the final diagnosis from doctors as the label. For clarity, we term symptoms from self-reports as explicit symptoms while those from conversational data as implicit symptoms. We choose four types of diseases for annotation, including upper respiratory infection, children functional dyspepsia, infantile diarrhea and children's bronchitis. We invite three annotators (one with medical background) to label all the symptom phrases in both self-reports and conversational data. The annotation is performed in two steps, namely symptom extraction and symptom normalization. Symptom Extraction We follow the BIO (begin-in-out) schema for symptom identification (Figure Table Symptom Normalization After symptom expression identification, medical experts manually link each symptom expression to the most relevant concept on SNOMED CT After symptom extraction and normalization, there are 144 unique symptoms identified. 
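The BIO-based symptom extraction described above amounts to collecting maximal B-I runs over the tagged tokens; a sketch follows, with made-up tokens and tags purely for illustration. Normalization would then map each extracted span to its SNOMED CT concept via the expert-built lexicon.

```python
def bio_spans(tokens, tags):
    """Collect symptom spans from BIO tags, e.g.
    tokens = ["has", "a", "runny", "nose", "today"]
    tags   = ["O",   "O", "B",     "I",    "O"]     ->  [("runny nose", 2, 4)]
    """
    spans, start = [], None
    for i, tag in enumerate(tags):
        if tag == "B":
            if start is not None:                 # close the previous span
                spans.append((" ".join(tokens[start:i]), start, i))
            start = i
        elif tag == "I" and start is not None:
            continue                              # extend the current span
        else:                                     # "O" or a stray "I"
            if start is not None:
                spans.append((" ".join(tokens[start:i]), start, i))
            start = None
    if start is not None:                         # span running to the end
        spans.append((" ".join(tokens[start:]), start, len(tokens)))
    return spans
```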
In order to reduce the size of the action space of the DS, only 67 symptoms with a frequency greater than or equal to 10 are kept. Samples are then generated, called user goals; each user goal consists of a disease label together with the explicit and implicit symptoms of one patient. A task-oriented DS typically contains three components, namely Natural Language Understanding (NLU), Dialogue Manager (DM) and Natural Language Generation (NLG). NLU detects the user intent and slots with values from utterances; DM tracks the dialogue states and takes system actions; NLG generates natural language given the system actions. In this work, we focus on the DM for automatic diagnosis, consisting of two sub-modules, namely, dialogue state tracker (DST) and policy learning. Both NLU and NLG are implemented with template-based models. Typically, a user simulator is designed to interact with the dialogue system. At the beginning of each dialogue session, the user simulator samples a user goal from the experiment dataset. At each turn t, the user takes an action a_{u,t} according to the current user state s_{u,t} and the previous agent action a_{t-1}, and transitions into the next user state s_{u,t+1}. In practice, the user state s_u is factored into an agenda A; the first user action a_{u,1} consists of the requested disease slot and all explicit symptoms. In terms of the symptoms requested by the agent during the course of the dialogue, the user will take one of three actions: True (if the symptom is positive), False (if the symptom is negative), or not sure (if the symptom is not mentioned in the user goal). If the agent informs the correct disease, the dialogue session will be terminated as successful by the user. Otherwise, the dialogue session will be recognized as failed if the agent makes an incorrect diagnosis or the dialogue reaches the maximum turn T. Markov Decision Process Formulation for Automatic Diagnosis. We cast the DS as a Markov Decision Process (MDP). State S. A dialogue state s includes the symptoms requested by the agent and informed by the user up to the current time t, the previous action of the user, the previous action of the agent, and the turn information. The representation vector of symptoms has a dimension equal to the number of all symptoms, whose elements are 1 for positive symptoms, -1 for negative symptoms, -2 for not-sure symptoms, and 0 for not-mentioned symptoms. Each state s ∈ S is the concatenation of these four vectors. Actions A. An action a ∈ A is composed of a dialogue act (e.g., inform, request, deny and confirm) and a slot (i.e., a normalized symptom or the special slot disease). In addition, thanks and close dialogue are also two actions. Transition T. The transition from s_t to s_{t+1} is the updating of state s_t based on the agent action a_t, the previous user action a_{u,t-1}, and the step time t. Reward R. The reward r_{t+1} = R(s_t, a_t) is the immediate reward at step time t after taking the action a_t, also known as reinforcement. Policy π. The policy describes the behavior of the agent: it takes the state s_t as input and outputs a probability distribution over all possible actions, π(a_t|s_t). Learning with DQN. In this paper, the policy is parameterized with a deep Q-network (DQN), trained by minimizing the temporal-difference error between Q(s_t, a_t; θ) and the one-step target y = r + γ max_{a'} Q(s', a'|θ⁻_i), where Q(s', a'|θ⁻_i) is the target network with parameters θ⁻_i from some previous iteration.
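A minimal sketch of the state encoding and the one-step DQN target described above. The one-hot encoding of the previous actions and the single normalized turn scalar are choices made for this sketch rather than details taken from the paper, and q_target stands for the frozen target network.

```python
import numpy as np

SYMPTOM_VALUE = {"positive": 1.0, "negative": -1.0, "not_sure": -2.0, "not_mentioned": 0.0}

def encode_state(symptom_status, prev_user_action, prev_agent_action, turn, max_turn, symptoms):
    """Concatenate the four state components: symptom vector, previous user
    action, previous agent action, and turn information."""
    sym_vec = np.array([SYMPTOM_VALUE[symptom_status.get(s, "not_mentioned")]
                        for s in symptoms])
    turn_vec = np.array([turn / max_turn])
    return np.concatenate([sym_vec, np.asarray(prev_user_action),
                           np.asarray(prev_agent_action), turn_vec])

def dqn_targets(batch, q_target, gamma=0.9):
    """One-step TD targets y = r + gamma * max_a' Q(s', a'; theta^-), computed
    with the frozen target network as in standard DQN."""
    targets = []
    for state, action, reward, next_state, done in batch:
        y = reward if done else reward + gamma * float(np.max(q_target(next_state)))
        targets.append(y)
    return np.array(targets)
```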
In practice, the behavior distribution is often selected by an ε-greedy policy that takes the action a = argmax_a' Q(s_t, a'; θ) with probability 1-ε and selects a random action with probability ε, which improves the efficiency of exploration. When training the policy, we use a technique known as experience replay. We store the agent's experiences at each time step, e_t = (s_t, a_t, r_t, s_{t+1}), in a fixed-size, queue-like buffer D. In a simulation epoch, the current DQN network is updated multiple times (depending on the batch size and the current size of the replay buffer) with different batches drawn randomly from the buffer, while the target DQN network is fixed during the updating of the current DQN network. At the end of each epoch, the target network is replaced by the current network and the current network is evaluated on the training set. The buffer is flushed if the current network performs better than all previous versions. The max dialogue turn T is 22. A positive reward of +44 is given to the agent at the end of a successful dialogue, and a reward of -22 is given to a failed one. We apply a step penalty of -1 for each turn to encourage shorter dialogues. The dataset is divided into two parts: 80% for training with 568 user goals and 20% for testing with 142 user goals. The ε of the ε-greedy strategy is set to 0.1 for effective action space exploration and the γ in the Bellman equation is 0.9. The size of buffer D is 10,000 and the batch size is 30. The neural network of the DQN is a single-layer network. The learning rate is 0.001. Each simulation epoch consists of 100 dialogue sessions and the current network is evaluated on 500 dialogue sessions at the end of each epoch. Before training, the buffer is pre-filled with the experiences of the rule-based agent (see below) to warm-start our dialogue system. To evaluate the performance of the proposed framework, we compare our model with baselines in terms of three evaluation metrics following prior work. The baselines include: (1) SVM: this model treats automatic diagnosis as a multi-class classification problem. It takes a one-hot representation of the symptoms in the user goal as input and predicts the disease. There are two configurations: one takes both explicit and implicit symptoms as input (denoted as SVM-ex&im), and the other takes only explicit symptoms to predict the disease (denoted as SVM-ex). (2) Random Agent: at each turn, the random agent takes an action randomly from the action space as the response to the user's action. (3) Rule-based Agent: the rule-based agent takes an action based on handcrafted rules. Conditioned on the current dialogue state s_t, the agent will inform the disease if all the known related symptoms are detected. If no disease can be identified, the agent will select one of the remaining symptoms randomly to inform. The relations between diseases and symptoms are extracted from the annotated corpus in advance. In this work, only the first T/2.5 symptoms with the highest frequency are kept for each disease so that the rule-based agent can inform a disease within the max dialogue turn T. In 2003, an ontology-based dialogue system that supports electronic referrals for breast cancer was proposed. In this paper, we propose a reinforcement learning based framework of dialogue system for automatic diagnosis and build a dataset for training the DS, derived from dialogue text between real patients and doctors.
Experiment results on a self-constructed dataset show that our dialogue system is able to collect additional symptoms via conversation with patients and improve the accuracy of automatic diagnosis. The relationship between diseases and symptoms is external knowledge that is thought to be useful for automatic diagnosis. One of our future directions is to explore models that can incorporate such external knowledge for better policy learning. | 708 | 1,681 | 708
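The ε-greedy exploration and the fixed-size, queue-like replay buffer used in training above can be sketched as follows. The buffer capacity, batch size, and ε follow the reported values; the flush is assumed to be triggered externally whenever the current network beats all previous versions.

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size, queue-like experience buffer; old experiences are evicted
    automatically, and flush() clears it when a better policy is found."""
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size=30):
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

    def flush(self):
        self.buffer.clear()

def epsilon_greedy(q_values, epsilon=0.1):
    """Explore a random action with probability epsilon, otherwise exploit."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return int(max(range(len(q_values)), key=lambda a: q_values[a]))
```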
Learning to Ignore Adversarial Attacks | Despite the strong performance of current NLP models, they can be brittle against adversarial attacks. To enable effective learning against adversarial inputs, we introduce the use of rationale models that can explicitly learn to ignore attack tokens. We find that the rationale models can successfully ignore over 90% of attack tokens. This approach leads to consistent and sizable improvements (∼10%) over baseline models in robustness on three datasets for both BERT and RoBERTa, and also reliably outperforms data augmentation with adversarial examples alone. In many cases, we find that our method is able to close the gap between model performance on a clean test set and an attacked test set and hence reduce the effect of adversarial attacks. | Adversarial robustness is an important issue in NLP, asking how to proof models against confounding tokens designed to maliciously manipulate model outputs. As such models become more powerful and ubiquitous, research continues to discover surprising vulnerabilities A common defense method to combat adversarial attacks is adversarial training. Given knowledge of attack strategies, it constructs synthetic adversarial examples to augment clean examples during training In this study, we propose a simple yet effective adversarial training schema for additive attacks: explicitly training the model to ignore adversarial tokens. We do this by augmenting the underlying model with a rationale extractor extractor to ignore attacking tokens as an additional joint objective to overall label accuracy (Fig. In addition to training the extractor to distinguish the attacking/non-attacking token dichotomy, we also explore the utility of human-provided explanations in this regard. In doing so, we ask: does learning from human rationales help the model avoid attending to attacking tokens? Fine-tuning BERT Our main results are that rationale-style models learn to ignore these attacks more effectively than only with data augmentation, leading to an improvement of ∼10% in accuracy on attacked examples compared to baseline models and an advantage of 2.4% over data augmentation alone, mostly recovering clean test performance. While human explanations may potentially improve the interpretability of these models, they are of limited use in improving this defense even further. In summary, we offer three main contributions: • We show that explicitly training an extractive rationale layer to ignore attack tokens is more effective than implicitly training a model via data augmentation with adversarial examples. • We assess whether human-annotated rationales augment this defense, showing that they have only a limited benefit. • We conduct an in-depth error analysis of differences between models, explaining some of the patterns we observe in our main results. Our code is available at | We build on prior work on adversarial robustness and learning from explanations. Adversarial robustness. Adversarial attacks against NLP models seek to maliciously manipulate model output by perturbing model input. As interest in adversarial attacks has increased, so has interest in developing models robust to these attacks. 
A popular defense method is adversarial training via data augmentation, first proposed by An early work is In this paper, we focus on model robustness against the ADDSENT additive attack proposed by The key idea behind the ADDSENT attack is that the mutations alter the semantics of the query by mutating the named entities and numbers, so that the attack contains words or phrases that are likely confusing to the model without changing the true semantics of the input. An example of the ADDSENT attack is given above. The original approach includes an additional step of using crowdsourced workers to filter ungrammatical sentences. We do not have access to this manual validation process in all datasets. Occasionally, ADDSENT generates ungrammatical attacks but it nevertheless proves empirically effective in reducing the performance of our models. Datasets. To evaluate our hypotheses on learning to ignore adversarial attacks, we train and evaluate models on the Multi-Sentence Reading Comprehension (MULTIRC; In modeling these two datasets, we follow standard practice in appending the query to the end of the document with [SEP] tokens. We use train/validation/test splits prepared by the ERASER dataset collection Directly applying the synthetic ADDSENT attack to MULTIRC and FEVER leads to occasionally ungrammatical adversarial examples due to incorrectly applied conversion heuristic or errors in constituency parsing. To alleviate this concern, we further evaluate on SQUAD Our study assesses whether adding an explicit rationale extractor to a model and training it to ignore attack tokens results in a more effective defense than simply adding attacked examples to the training set. This comparison results in several combinations of model architectures and training regimes. We denote each training instance as (x, r, y): a text sequence x consisting of the concatenated document and query, a ground-truth binary rationale sequence r, and a binary label y. ... and 18 national cups. FC Bayern was founded in 1900 by 11 football players, led by Franz John. Although Bayern won ... European Cup three times in a row In the baseline adversarial training via data augmentation condition (denoted ADV.), we add ADDSENT-attacked versions of each training example to the training set on a one-to-one basis, allowing the model to train for the presence of such attacks. This represents a fairly standard baseline defense in the literature Following prior adversarial robustness literature The two components are trained together to optimize predicted label accuracy as well as loss associated with the predicted rationale. In an unsupervised scenario, this loss punishes the norm of the predicted rationale, encouraging sparsity on the (heuristic) assumption that a sparse rationale is more interpretable. In this study, we rather consider the supervised scenario, where we punish r's error with respect to a ground-truth rationale r. However, we find empirically that the rationale sparsity objective is useful in combination with the rationale supervision objective, leading to the following joint objective function using cross-entropy loss L CE with hyperparameter weights λ 1 and λ 2 : Adversarial training with rationale supervision. To introduce rationale supervision, we augment the training set with attacked examples on a one-to-one basis with original examples, similar to adversarial training. Moreover, we can change the groundtruth rationale to reflect the desired behavior for the model. 
We consider two options for this new ground-truth r: (1) a binary indicator of whether a token is adversarial or not (ADV. + ATK. SUP.), and (2) the human-annotated rationale (ADV. + HUMAN SUP.), which also filters adversarial tokens. Table Table Taken together, these conditions address our three research questions: (1) Is adversarial training via rationale supervision more effective than via attacked examples? (2) Does training the model to emulate human explanation make it intrinsically more robust to attacks? (3) Do human explanations improve upon adversarial training with non-attack tokens as rationale supervision? We start by describing our experimental setup and evaluation metrics. We then investigate model performance with different training regimes and conduct an in-depth error analysis. Our study compares whether rationale-style models are better at learning to explicitly ignore adversarial tokens than standard models via adversarial training. As we describe above, we train three variants of the standard classification model (NO ADV., ADV., ADV.-10X), and three variants of the rationale model (ADV. + ATK. SUP., HUMAN SUP., ADV. + HUMAN SUP.). Exploring these 6 architecture/training combinations for three datasets (MULTIRC, FEVER, and SQUAD) and two underlying models (BERT and RoBERTa), we report results from all trained models in Table Additionally, for the rationale models, we report the mean percentage of attack and non-attack tokens included in each predicted rationale, two metrics that help explain our accuracy results. The mean percentage of attack tokens included in the predicted rationale indicates the effectiveness of ignoring attack tokens: the lower the better. We focus our analysis on three questions: 1. Does adversarial rationale supervision on augmented data improve robustness over adversarial data augmentation alone? 2. Does human rationale supervision improve adversarial robustness over a standard model? 3. Does the addition of human rationales to adversarial training further improve robustness? Table Data augmentation with adversarial examples works, mostly. In almost all cases, data augmentation does result in improved performance on the attacked test set, improving +5.9% (FEVER) and +17.6% (SQUAD) for BERT, as well as +6.4% (MULTIRC), +9.7% (FEVER), and +9.4% (SQUAD) for RoBERTa. The exception is BERT on MULTIRC, where it causes a decrease of -1.0%. However, in only one case out of six does data augmentation with adversarial examples bring the model back to clean test performance (RoBERTa on MULTIRC, +0.3%). Surprisingly, BERT on MULTIRC is the only scenario where the ADV.-10X augmentation significantly improves attack accuracy (4.3% improvement over ADV.). In all the other cases, adding more adversarial examples does not improve robustness and even leads to a 3.5% drop in SQUAD for RoBERTa. This result demonstrates that BERT and RoBERTa may not learn from adversarial examples alone. Adversarial rationale supervision improves on adversarial training baselines in all cases. We see an improvement of +4.6% for BERT on MULTIRC, +2.9% for BERT on FEVER, +2.7% for BERT on SQUAD, +2.2% for RoBERTa on MULTIRC, +0.7% for RoBERTa on FEVER, and +1.0% for RoBERTa on SQUAD (2.4% on average). For the one case where adversarial data augmentation recovered clean test performance (RoBERTa on MULTIRC), adversarial rationale supervision actu- ally improves clean test performance by +2.5%. The effectiveness of ADV. + ATK. SUP. is even more salient if we compare with NO ADV. 
on attacked test: 3.6%, 8.8%, and 20.3% for BERT on MULTIRC, FEVER, and SQUAD, 8.6%, 10.4%, and 10.4% for RoBERTa on MULTIRC, FEVER, and SQUAD (10.4% on average). The above findings remain true even when we compare our methods against the theoretically stronger baseline of ADV.-10X, where the training dataset is augmented with 10 perturbed examples for every training example. Our models trained with adversarial rationale supervision outperform ADV.-10X across all datasets and models, and our best model outperforms the ADV.-10X baseline by 3.3% on average. This result highlights both the efficiency and the effectiveness of our method: with adversarial rationale supervision, BERT and RoBERTa achieve greater defense against the ADDSENT attack using 10% of the adversarial examples. Interestingly, the adversarially-supervised rationale model demonstrates a strong ability to generalize knowledge learned from synthetic attacks to tune out human-rewritten attacks (+20.3% on SQUAD; recall we do not have human-rewritten attacks during training), indicating the potential of our method in a real-world scenario. Table Table This result may be explained by the fact that human rationales for these datasets identify the part of the document that pertains particularly to the query, while the ADDSENT attack crafts adversarial content with a semantic resemblance to that same query. Hence, it is understandable that human rationale training would not improve robustness. Human and adversarial rationale supervision (ADV. + HUMAN SUP.). Although human rationales alone may not reliably improve model robustness, a final question is whether human rationales can serve as a useful addition to adversarial training. Does training the model to both ignore adversarial tokens and emulate human explanations further improve robustness against the ADDSENT attack? In two out of four cases, the performance of ADV. + HUMAN SUP. is equal to that of ADV. + ATK. SUP. Only for BERT on MULTIRC does ADV. + HUMAN SUP. result in an improvement, being the only configuration that brings performance back to that of clean test for that model and dataset. For RoBERTa on MULTIRC, it actually weakens attacked test performance. While these results are mixed, Table Overall, our results suggest that human rationales have limited effect in defending against adversarial attacks, but can be important in developing sparse (and potentially interpretable) models. To better understand the behavior of the models, we examine mistakes from BERT compared to explicitly training a rationale extractor on MULTIRC. We start with a qualitative analysis of example errors, and then discuss general trends, especially on why human rationales only provide limited benefits over ADV. + ATK. SUP. More in-depth analyses can be found in the appendix for space reasons, including a Venn diagram of model mistakes. Qualitative analysis. We look at example errors of ADV. to investigate attacks that are confusing even after adversarial augmentation. Table Example 1 shows a case where models with explicit rationale extractors ignore attacks more effectively than ADV. In the attack sentence, "tete didn't stay in" is highly similar to the query, so a model likely predicts True if it uses the attack information. In comparison, both rationale models ignore the attack in label prediction, which enables them to make correct predictions. Example 2 demonstrates that ADV. + HUMAN SUP.
makes mistakes when it fails to include crucial information in rationales while avoiding attack tokens. ADV. + HUMAN SUP. predicts the wrong label because it misses the information about the number of friends in its rationale. ADV. + ATK. SUP. gets this example correct because it can both ignore the attack and include the necessary information. Finally, Example 3 shows an example where ADV. + HUMAN SUP. is better than ADV. + ATK. SUP. at generating rationales that ignore noise. ADV. + HUMAN SUP. includes attack tokens in its rationale, but it is still able to predict the correct label because the attack is not confusing given the selected rationale. The generated rationale helps ADV. + HUMAN SUP. to avoid unnecessary information that may confuse the model. For example, the sentence with "picts" could confuse the model into predicting True. On the other hand, ADV. + ATK. SUP. gets this example wrong, despite occluding all attack tokens. More generally, we find that ADV. + HUMAN SUP. tends to have high false negative rates. When ADV. + HUMAN SUP. fails to extract good rationales, it tends to predict False due to missing information from the rationale. In contrast, ADV. + ATK. SUP. rarely occludes necessary information, so it does not suffer from the same issue. ADV. + ATK. SUP. is better than ADV. + HUMAN SUP. when human rationales are denser and passage length is longer (see Table In summary, these analyses highlight the challenges of learning from human rationales: it requires precise occlusion of irrelevant tokens while keeping valuable tokens, and must account for variance in human rationale and input lengths. These challenges partly explain the limited benefit of ADV. + HUMAN SUP. over ADV. + ATK. SUP. In this study, we find that adding an explicit extractor layer helps a model learn to ignore additive adversarial attacks produced by the ADDSENT attack more effectively than conventional adversarial training via data augmentation. This is an exciting result because it defeats an attack which is otherwise stubbornly effective against even copious adversarial data augmentation. It is a novel use for this type of explicit token relevance representation, which is more typically applied for model interpretability Our work focuses on improving model robustness by explicitly ignoring adversarial attacks. In this work, we only explore a known type of adversarial attack (ADDSENT), and the performance of our method against unknown attacks is yet to be validated. Since our method uses rationalization as the underlying mechanism for ignoring tokens, it would take non-trivial work to make our method compatible with attacks in the form of token removal and flipping. Finally, we limit our experiments to the domain of QA, where the ADDSENT attack is naturally applicable. Our work contributes to the line of research that focuses on improving the adversarial robustness of language models. We also explore novel ways to integrate human explanations into the training paradigm. We believe robustness to adversarial attacks is essential to the deployment of trustworthy models in the wild, and we hope this work brings current research a step closer to this objective. To avoid ethical concerns related to over-claiming results, we emphasize in both our concluding discussion and the limitations section that our work builds on the assumption that we know the type of attack and experiments only with ADDSENT. Furthermore, our approach tends to increase the computational cost compared to adversarial training both during training and inference.
One should consider the tradeoff between robustness and computation. We use the HuggingFace In practice, we find it useful to pretrain the predictor layer f of the rationale model on the full input before jointly training it with the extractor g. We observe that this trick stabilizes training and helps prevent mode collapse. In producing the predicted rationale, we automatically assign a 1 (indicating relevance) to every token in the query, so that they are always fully visible to the predictor and the effect of the extractor is in adjudicating which tokens of the document are used or ignored. Traditionally, this style of rationale model produces binary predicted rationales via either reinforcement learning However, we find that relaxing this binary constraint leads to better outcomes for adversarial training. Thus, our model produces the predicted rationale r by passing the predicted rationale logits ϕ through a sigmoid function. The masking function m we use simply multiplicatively weights x by the predicted rationale r during training (we discretize r during testing). From a theoretical perspective, jointly optimizing the rationale extractor g and label predictor f should allow the model to predict a rationale that is more adapted to the predictor. Separately optimizing both components implies that the rationale extractor does not get penalized for poor label prediction performance, and often leads to a predicted rationale that is closer to the human rationale r. In our experiments, we include both training setups as a hyperparameter. For our experiments, we fine-tune both the rationale extractor g and predictor f for the rationale models from a pre-trained language model. We fine-tune BERT components from a pre-trained bert-base-uncased model, and RoBERTa from a pre-trained roberta-large model. We use an Adam optimizer with β_1 = 0.9 and β_2 = 0.999 for all experiments. We find gradient accumulation helps with training stability of BERT and RoBERTa, and we report gradient accumulation as a hyperparameter for both models. Table We ran our experiments on a mix of RTX 3090, A30 and A40 GPUs. All experiments combined take less than 300 GPU hours. The rationale model has about two times the parameters of its base model. Due to limited computational resources and a large number of experiment conditions, our experiments are not repeated across multiple random seeds. To verify the statistical significance of improvements of the top-performing rationale models against the strongest baselines, we report Wilcoxon signed-rank test results in Table (Table note: easy examples have high Jaccard similarity between human rationale and QUERY+ANSWER.) BERT rationale models handle denser human rationales slightly better than BERT (ADV.). We define the sparsity of X as the number of tokens in X divided by the total number of tokens in the input, so larger sparsity corresponds to denser rationales. Counter-intuitively, all three models perform poorly on examples with the densest human rationales. This can be accounted for by the fact that these are also examples where QUERY+ANSWER and the human rationale have the least Jaccard similarity: human rationale sparsity and the Jaccard similarity have a Pearson's coefficient of 0.25 (p < 0.001). Thus, examples with denser human rationales are likely to contain confusing information for models. We find BERT rationale models can resist this confusion better than BERT (ADV.).
For instance, human rationale sparsity = 0.167 when human-supervised BERT is correct but BERT is wrong, and it is 0.165 when BERT is correct but the BERT rationale model is wrong. In Table Surprisingly, the synthetic attack ADDSENTS is more effective than the human-generated ADDSENTH prior to adversarial training. Since the ADDSENT attack works by mutating the query and adding a fake answer, the synthetic attack often appears syntactically similar to the query. On the other hand, human-generated attacks in ADDSENTH often fit more naturally in the document and are grammatically correct, but do not mirror the structure of the query. For a model that solves the QA task by simply looking for the best match of the query inside a document while skipping complex reasoning, it's conceivable that ADDSENTS leads to the greatest performance drop. Since ADDSENTH and ADDONESENTH are attack examples re-written and filtered by humans, we use them as a proxy for understanding the model behavior in a real-world setting. We find the BERT Rationale model with attack rationale supervision significantly outperforms the BERT Classification baseline trained with adversarial augmentation (+2.7% on ADDSENTH, +2.2% on ADDONESENTH). Similar to findings in §5.2, we observe that attack rationale supervision (ADV. + ATK. SUP.) is a more effective adversarial training method than adversarial data augmentation (ADV.). It is worth noting that despite training on the synthetic attacks, the rationale model demonstrates a strong ability to generalize knowledge learned from synthetic attacks to tune out human-rewritten attacks, which explains the strong performance on ADDSENTH and ADDONESENTH. An apparent anomaly in Table | 750 | 2,088 | 750 |
Directions for NLP Practices Applied to Online Hate Speech Detection | Addressing hate speech in online spaces has been conceptualized as a classification task that uses Natural Language Processing (NLP) techniques. Through this conceptualization, the hate speech detection task has relied on common conventions and practices from NLP. For instance, inter-annotator agreement is conceptualized as a way to measure dataset quality and certain metrics and benchmarks are used to assure model generalization. However, hate speech is a deeply complex and situated concept that eludes such static and disembodied practices. In this position paper, we critically reflect on these methodologies for hate speech detection, we argue that many conventions in NLP are poorly suited for the problem and encourage researchers to develop methods that are more appropriate for the task. | Online hate speech is the cause for growing concern, due to its social impacts The increase in research interest to hate speech detection has spurred on a growth and variety in annotated resources for the task created within the academy and industry. However, at the same time, critical work on hate speech detection has found that there are significant challenges related to the published research outcome with respect to the construction of data In contrast to prior work, which has sought to address specific shortcomings of machine learning algorithms for hate speech detection (e.g., More specifically, we a) reflect on frequently used conventions for classification of text and explore their inadequacies with reference to hate speech detection; and b) discuss the future of current methodologies for this task. On the basis of our analysis, we conclude that current models are incapable of detecting hate speech without harms to marginalized communities. We therefore call for the scientific community to adapt NLP methodologies such that future developments center the impacts that used methodologies may have on marginalized communities. We believe that by critically reflecting on the potential real-world impacts of the methodologies for hate speech detection on marginalized communities, the scientific community can come to identify methodologies that result in more just futures. | Hate speech detection is commonly conceptualized as a supervised classification task, with the goal to determine whether content is hateful or not Defining hate speech is to control the discourse surrounding the phenomena; determine which groups are minoritized, and therefore should be protected; and which patterns of speech should be sanctioned Moreover, hate speech is often categorized under the umbrella terms such as "abusive language", "offensive language", or "toxicity" Furthermore, most NLP research exclusively considers textual material, assuming that it provides adequate information. However, hate speech is deeply tied to oppression and it is therefore necessary to understand the speaker and listener's subjectivities to situate the text and adjudicate whether it constitutes hate speech. More often than not, this information is unavailable from the text. A common convention in labeling a dataset is to use an odd number of annotations for each text sample. The reliability of the labels in a dataset is often measured by computing the Inter-Annotator Agreement (IAA). In this section, we discuss biases in annotations and the paradoxical search for ground truth within disagreement. 
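For concreteness, the agreement convention referred to above is typically computed with a chance-corrected statistic; below is a minimal sketch for two annotators and binary hate/non-hate labels, using Cohen's kappa as one common choice (the example data and any threshold for "acceptable" agreement are illustrative, not something the paper endorses).

```python
# Minimal sketch: Cohen's kappa for two annotators with binary labels.
from collections import Counter

def cohens_kappa(ann_a, ann_b):
    """Chance-corrected agreement between two annotators."""
    n = len(ann_a)
    observed = sum(a == b for a, b in zip(ann_a, ann_b)) / n
    dist_a, dist_b = Counter(ann_a), Counter(ann_b)
    expected = sum((dist_a[l] / n) * (dist_b[l] / n)
                   for l in set(ann_a) | set(ann_b))
    return (observed - expected) / (1 - expected)

# Two annotators labelling the same five posts (1 = hateful, 0 = not hateful).
print(cohens_kappa([1, 0, 1, 1, 0], [1, 0, 0, 1, 0]))  # ≈ 0.62
```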
Socially biased systems are a growing concern within NLP The annotation challenge is further aggravated by the absence of widely agreed-upon annotation criteria The selection of annotators is another source of social biases The goal for annotation efforts in NLP is to assign a gold label to data (e.g., a document or an entity therein) Once a dataset has been labeled, models can be trained and evaluated. To assess model performance and generalizability, the trained models are evaluated on held-out test sets Although contemporary machine learning models often show an impressive performance when applied to different NLP tasks, they have been criticized for failing to grasp pragmatics due to their reliance on the distributional hypothesis Recent work on quantitative benchmarks has questioned the ability of contemporary methods to measure generalization in machine learning One reason models may be vulnerable to adversarial attacks is that they over-fit to tokens and token interaction patterns instead of learning to generalize the concept of hate speech We identified the following three factors that make the I.I.D. assumption unlikely to hold, resulting in models that are unlikely to generalize: 1) Given the variety of concepts and definitions in hate speech, it is very hard to assure that the different samples express the same flavor of the phenomena; 2) due to the fact that hate speech only occurs very rarely in random samples 3 Discussion: On the Present and Future for Hate Speech Detection In the preceding sections, we have highlighted a number of challenges and ethical concerns that arise from the current conceptualization of hate speech detection. We argue that these limitations render current models unable to detect hate speech without significant risk to minorities. Specifically, classifiers that are unable to accurately classify content directed towards marginalized communities risk increasing the costs for said communities to participate in online spaces, due to the increased risks of being subject to hate speech whilst also remaining unprotected by hate speech detection systems Overcoming the identified challenges will require shifting our research practices. In this section, we propose new directions for hate speech detection. However, we do not expect that implementing any individual solution in isolation will result in ready to use classifiers. We therefore emphasize the need for research to continuously reassess the risks that arise from methodological innovations for hate speech detection. Accounting for Plurality of Hate Speech While contemporary methods for annotating hate speech imply the assumption that there is a universal definition of hate speech, and that models derived from labeled data are applicable across all contexts, we argue instead for a pluralist approach to annotation. By taking a pluralist approach, e.g. through situating models within subjective contexts, researchers are afforded the ability to view hate speech as contextual to the subjectivities of the target of hate. For instance, by narrowing down definitions of hate, clearly providing the geographical and cultural contexts, and specifying the values and goals for the model, researchers can clearly articulate within which contexts models and data are valid and which particular groups models seek to protect. Such model framing can help address the issues surrounding universality and can provide space for researchers to consider how their choices have political implications for what speech is sanctioned. 
Accounting for Context Supervised machine learning models for hate speech primarily operate on text, and a single label for each document during training. However, whether a text amounts to hate speech is highly context dependent Representative Sampling Procedures Given the sampling methods used to ensure an adequate distribution of hate speech for labeling, models are often trained on data distributions that significantly vary from real-world occurrences of hate speech. To address this concern, future data collection efforts should seek to minimize such distributional differences whilst taking into account wider notions of conversational contexts. Handling Classification Errors For many NLP tasks classification errors do not have immediate harms. For hate speech detection, classification errors can result in significant immediate harms to people. False negatives can result in hateful speech being passed as acceptable which can allow harmful content to remain unsanctioned In this paper, we have argued that current NLP practices for hate speech detection are unlikely to address the core concerns of hate speech detection, i.e. identify hate with minimal errors and protect marginalized communities. We therefore call for the NLP community to rethink its methodologies such that future developments reduce risk for marginalized communities. One avenue for future work is to follow the principles of design justice (Costanza-Chock, 2018), which emphasizes community inclusion and ownership of (technological) solutions. Following the principles of design justice, researchers would decenter their own expertise in favor of the lived expertise of affected communities. We strongly believe that future steps must center a multidisciplinary approach in close communication with affected groups. By taking steps to document and address the limitations of contemporary methods for hate speech, researchers can identify new avenues for improving hate speech detection models. Moreover, researchers can take steps towards ensuring that content moderation technologies provide safer online spaces for marginalized communities by mitigating the prevalence of online hate speech. This study is intended as a theoretical consideration of the issues that arise in hate speech detection. The study analyzes the current limits of using machine learning infrastructures for the identification and moderation of hate speech. One limitation of the work is the theoretical frame of our work. While our frame allows for more deeply understanding the issues of contemporary methods for hate speech detection, deeper considerations of sociological and anthropological methods can afford significant improvements in our understanding of NLP technologies, such as hate speech detection, as socio-technical systems and their social impacts. A further limitation of our work is its focus on research rather than application, and therefore does not discuss how classification models are used in real-world content moderation applications. This is left to future work. In this section, we provide examples of sentences classified by a state of the art model for hate speech detection (i.e. the Dynabench classifier proposed by When we keep the word "women" static and change the remaining sentence to something inoffensive, we see that the model performs better which indicates a correct distinction between minority terms and hate speech. We continue with new examples and introduce replacements to the word "kitchen". 
When replaced by "school", the model incorrectly predicts that it is hateful. This further evidences the model's inability to distinguish prescriptions of actions from hate. More concerning is the model's inability to correctly relate the implication of femicide, when evaluated on the text "Women's place is in the cemetery." While this further provides further evidence that the model does not learn a latent understanding of power dynamics, it also illustrates that the model may not provide adequate protections against violent speech towards women. In our second set of examples (examples 2.1-2.3), we examine how the model responds more broadly to conversations around power dynamics. We see here that the explicit mentioning of gender and race prompts incorrect predictions from the model, i.e., that the mere mention of comparative privilege is labeled as hateful. Should this model be deployed, it would actively limit conversations around race, gender, and power dynamics more broadly. Such conversations are frequently had by communities that are marginalized, in efforts to identify, discuss, and seek to remedy their own marginalization. That is, the model would censor conversations that are necessary to have, in order for society to progress beyond contemporary forms of marginalization, thereby actively limiting movements for social progress. In our final three examples, we see that the model makes incorrect predictions for all three cases. In the latter two cases, we see further evidence that the model does not link notions of sexism and fascism with their expressed goals of marginalization. We acknowledge that the NLP community is working towards identifying shortcomings of the current research practices, e.g., by studying how to learn from disagreements Here, we provide a brief summary of these efforts, which can also serve as a source of ideas for future approaches to the problem. To counteract the lack of contextual information, the latest developments have added information to single text samples, including the conversation threads Some prior work has sought to evaluate the quality of keyword-based data collection. For instance, Recent work has sought alternatives to IAA and gold standards for general applications of machine learning Prior work has addressed the question of models over-fitting to tokens and spurious correlations in data (e.g. | 800 | 1,393 | 800 |
A Simple and Effective Usage of Word Clusters for CBOW Model | We propose a simple and effective method for incorporating word clusters into the Continuous Bag-of-Words (CBOW) model. Specifically, we propose to replace infrequent input and output words in CBOW model with their clusters. The resulting cluster-incorporated CBOW model produces embeddings of frequent words and a small amount of cluster embeddings, which will be fine-tuned in downstream tasks. We empirically show our replacing method works well on several downstream tasks. Through our analysis, we show that our method might be also useful for other similar models which produce word embeddings. | Word embeddings have been widely applied to various natural language processing (NLP) tasks. These embeddings can be pretrained on a large corpus and carry useful semantic information. One of the most well-known methods for obtaining word embeddings is based on Continuous Bag-of-Words (CBOW) In this paper, we focus on incorporating word clusters into CBOW model. Each word cluster consists of words that function similarly. By aggregating such words, we can alleviate data sparsity, even though each of those words is infrequent. In the past few years, word clusters have been applied to various tasks, such as named-entity recognition In our method, we keep only very frequent words and replace the other words with their clusters for both input and output words in the CBOW model. This is motivated by the fact that word clusters are more reliable than infrequent words. Thus, only very frequent word embeddings and a small amount of cluster embeddings are produced as the output. When fine-tuning the trained embeddings on downstream tasks, the embeddings of infrequent words within one cluster are initialized by the embedding of their cluster to increase the coverage of pretrained word embeddings. Since word embeddings are usually trained on the large-scale dataset. For making clusters on the large-scale dataset, we choose bidirectional, interpolated, refining, and alternating (BIRA) predictive exchange algorithm | A number of related research efforts have been done to help to learn better word embeddings aiming at different aspects. For example, There have also been some previous researches that utilized word clusters for reducing the number of word embeddings. 3 Our Method Let w t denote the t-th word in a given text. We adopt the basic CBOW model architecture for learning word embeddings. The CBOW model predicts the output word w t given the input words in the window which precede or follow the output word. When the window size is 2, as an example, the input words are w t-2 , w t-1 , w t+1 , w t+2 . We denote the input and output embeddings of word w i respectively as x i and o i . The CBOW model computes the hidden representation as follows: where c is the window size. We use negative sampling where k is the size of the negative sample, o j is the j-th noise word embedding and σ is the sigmoid function. Each word in the negative sample is drawn from the unigram distribution. As a method for incorporating word clusters, we propose to replace infrequent words with their clusters for the input and output. The architecture is shown in Figure • ReIn: In the input, x t+i in Eq. (1) will be replaced with d t+i if the frequency of w t+i is less than threshold f in . • ReOut: In the output, output words whose frequency is less than f out are replaced with their clusters. 
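Since Eq. (1) and the negative-sampling objective are not reproduced in the text above, the sketch below shows the standard CBOW formulation being assumed, together with the ReIn/ReOut replacement applied to its inputs; the shared ID space of frequent words plus clusters and all names are implementation assumptions, not the authors' code.

```python
# Sketch of cluster-incorporated CBOW with negative sampling (ReIn + ReOut).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClusterCBOW(nn.Module):
    """CBOW with negative sampling over 'units' = frequent words + clusters."""
    def __init__(self, num_units, dim=100):
        super().__init__()
        self.input_emb = nn.Embedding(num_units, dim)    # x_i / d_i
        self.output_emb = nn.Embedding(num_units, dim)   # o_i

    def forward(self, context_units, target_unit, negative_units):
        # Eq. (1): hidden vector is the sum of the (replaced) context embeddings.
        h = self.input_emb(context_units).sum(dim=0)     # ReIn applied upstream
        pos = self.output_emb(target_unit)               # ReOut: target may be a cluster
        neg = self.output_emb(negative_units)            # k noise units
        # Negative-sampling objective: -log σ(o·h) - Σ log σ(-o_j·h)
        return -F.logsigmoid(torch.dot(pos, h)) - F.logsigmoid(-(neg @ h)).sum()

def to_units(word_ids, freq, word2cluster, threshold=100):
    """ReIn/ReOut replacement: keep frequent words, map the rest to cluster IDs."""
    return torch.tensor([w if freq[w] >= threshold else word2cluster[w]
                         for w in word_ids])
```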
Thus, in negative sampling, a noise word will be sampled from clusters and frequent words. As with the standard CBOW model, we use the input word embeddings and input cluster embeddings for downstream tasks. Thresholds f_in and f_out are set to 100 in all experiments. Due to this large value, each cluster contains many infrequent words, which share the same embedding. We use the two methods together, which is referred to as ReIn+ReOut in the following experiments. Since the embeddings of clusters are learned by aggregating many infrequent words, they are more robust than the embeddings of the infrequent words. During the fine-tuning process for a downstream task, the embeddings of infrequent words are first initialized with the embeddings of their clusters. As most of these infrequent words appear only a few times, these embeddings will not be updated far away from each other within one cluster. The visualization of these embeddings before and after fine-tuning can be found in Appendix B. As a result, these embeddings for infrequent words become more reliable, since originally most infrequent word embeddings are updated only several times and are not far away from where they were randomly initialized. Since the context of frequent words becomes less noisy by replacing all the infrequent words with their clusters, the learned frequent word embeddings are also better, as shown later in our experiments. The standard CBOW model is usually trained with negative sampling, which is designed for speeding up the training process. By using ReOut, infrequent noise words will be replaced with their clusters, so that each negative sample effectively covers more noise words than in the original CBOW model. As a result, ReOut makes the training of the CBOW model more effective, as shown later in our experiments. We applied our embeddings to downstream tasks: language modeling (LM) and low-resource machine translation (MT). When applying them to the downstream tasks, we only used the training data of the specific task to obtain word clusters and embeddings, without any extra data. We then used the learned embeddings to initialize the lookup table of word embeddings for the task. In this paper, we limit the applications of our model to relatively small datasets to demonstrate the usefulness of our method. We plan to conduct larger-scale experiments on more downstream tasks in future work. In the following tables, CBOW and ReIn+ReOut indicate the initialization methods used for the specific downstream tasks. In this section, we describe the hyper-parameters for producing word clusters and word embeddings. As we mentioned before, we obtained word clusters through the ClusterCat software. For most hyperparameters, we used its default values. We set the number of clusters to 600 in all our experiments. Since our work involves many tasks in total, it is hard to choose the optimal number of word clusters for each task. We experimented with several values (600, 800 and 1000) and observed the same trend. Thus, we simply chose 600, for convenience, for all tasks.
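Returning briefly to the fine-tuning initialisation described above, here is a small dictionary-based sketch of how the downstream lookup table could be built from the trained embeddings; the helper name and data structures are assumptions, since the text does not give implementation details.

```python
# Sketch: initialising a downstream lookup table from ReIn+ReOut embeddings.
def build_lookup_table(task_vocab, freq, word2cluster, trained, threshold=100):
    """trained: dict mapping frequent words and cluster IDs to trained vectors."""
    table = {}
    for w in task_vocab:
        if freq.get(w, 0) >= threshold and w in trained:
            table[w] = trained[w]                  # frequent word: its own vector
        else:
            table[w] = trained[word2cluster[w]]    # infrequent word: cluster vector
    return table
```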
For producing word embeddings, our implementation was based on the fasttext We test ReIn+ReOut based on the recent state-of-the-art awd-lstm-lm codebase We applied our method to the standard long short-term memory (LSTM) based sequence-to-sequence (seq2seq) model on two datasets: German-English (de-en) with 153K sentence pairs from IWSLT 2014 To verify the effect of word clusters on different languages, we selected 8 datasets containing typologically diverse languages from LM datasets released by We chose the available standard LSTM-LM code In this section, we analyse ReIn+ReOut on the basis of LM experiments with en and de datasets. To show the gain for frequent and infrequent words, we measured the perplexity for frequent and infrequent words in the test data separately. Specifically, we calculated the perplexity of the next word, when an infrequent word is given as the current word. A similar analysis on language models can be found in The results of the ablation study are shown in Table For this ablation, we increased the number of negative samples for ReIn and CBOW. The training will be more effective if we increase the number of negative samples, while training the model will also take more time. As we increased the size of the negative samples, we obtained better results for both ReIn and CBOW. We increased it only to 30 because we did not observe improvements when we made it larger. This result indicates that we can use word clusters to obtain better results with a small number of negative samples. In reality, we can also use off-the-shelf word clusters to avoid spending time producing word clusters. We proposed a simple and effective method to incorporate word clusters into the CBOW model. Our method is effective on several downstream tasks. For future work, we will test our methods on larger corpora and also add more downstream tasks. We will also study how to combine word clusters and subword information. We first applied ClusterCat to the preprocessed corpus to obtain word clusters and then produced cluster-incorporated word embeddings with ReIn+ReOut. The results are shown in Table We visualize word embeddings using t-SNE projections. Specifically, we randomly chose 15 clusters and all frequent words from en and visualize frequent and infrequent word embeddings in these 15 clusters in Figure | 600 | 1,425 | 600 |
Aligning Medical Domain Ontologies for Clinical Query Extraction | Often, there is a need to use the knowledge from multiple ontologies. This is particularly the case within the context of medical imaging, where a single ontology is not enough to provide the complementary knowledge about anatomy, radiology and diseases that is required by the related applications. Consequently, semantic integration of these different but related types of medical knowledge that is present in disparate domain ontologies becomes necessary. Medical ontology alignment addresses this need by identifying the semantically equivalent concepts across multiple medical ontologies. The resulting alignments can then be used to annotate the medical images and related patient text data. A corresponding semantic search engine that operates on these annotations (i.e. alignments) instead of simple keywords can, in this way, deliver the clinical users a coherent set of medical image and patient text data. | As the content of numerous ontologies in the biomedical domain increases, so does the need for sharing and reusing this body of knowledge. Often, there is a need to use the knowledge from multiple ontologies. This is particularly the case within the context of medical imaging, where a single ontology is not enough to support the necessary heterogeneous tasks that require complementary knowledge about human anatomy, radiology and diseases. Medical imaging constitutes the context of this work, which lies within the Theseus-MEDICO The Theseus-MEDICO use case has the objective of building the next generation of intelligent, scalable, and robust search engine for the medi-cal imaging domain. MEDICO's proposed solution relies on ontology based semantic annotation of the medical image contents and the related patient data. Semantic annotation of medical image contents and patient text data allows for a mark-up with meaningful meta-information at a higher level of granularity that goes beyond simple keywords. Therefore, the data which is processed and stored in this way can be efficiently retrieved by a corresponding search engine such as the one envisioned in MEDICO. The diagnostic analysis of medical images typically concentrates around three questions To satisfy the radiologist's information need, this scattered knowledge has to be gathered and integrated from disparate ontologies, in particular from those about human anatomy, radiology and diseases. Subsequently, the medical image contents and the related patient data have to be annotated with this information (i.e. ontology concepts and relationships) rather than the single elements from independent ontologies. Three ontologies that address the three questions above are relevant to gather the necessary knowledge about human anatomy, radiology and diseases. These are the Foundational Model of Anatomy Given this context, the semantic integration of these ontologies as knowledge sources becomes critical. Ontology alignment addresses this requirement by identifying semantically equivalent concepts in multiple ontologies. These concepts are then made compatible with each other through meaningful relationships. Hence, our goal is to identify the correspondences between the concepts of different medical ontologies that are relevant to the medical image contents. The rest of this paper is organized as follows. In the next section we explain the motivation behind aligning the medical ontologies. 
Section 3 discusses related work in ontology alignment in general and in the biomedical domain. In section 4 we introduce our approach and explain why it goes beyond existing methods. Here we also explain the application scenario, which exhibits how aligned medical ontologies can contribute to the identification of relevant clinical search queries. Section 5 introduces the materials and methods that are relevant for this work. Finally 6 and 7 discusses the planned evaluation and presents the roadmap for the remaining work, respectively | The following scenario illustrates how the alignment of medical ontologies facilitates the integration of medical knowledge that is relevant to medical image contents from multiple ontologies. Suppose that we want to help a radiologist, who searches for related information about the manifestations of a certain type of lymphoma on a certain organ, e.g. liver, on medical images. As discussed earlier the three types of knowledge that serves him would be about the human anatomy (liver), the organ's location in the body (e.g. upper limb, lower limb, neighboring organs etc.) and whether what he sees is normal or abnormal (pathological observations, symptoms, and findings about lymphoma). Once we know what the radiologist is looking for we can support him in his search in that we present him an integrated view of only the liver lymphoma relevant portions of the patient health records (or of that patient's record), PubMed abstracts as reference resource, drug databases, experience reports from other colleagues, treatment plans, notes of other radiologists or even discussions from clinical web discussion boards. From the NCI Thesaurus we can obtain the information that 'liver lymphoma' is the synonym for 'hepatic lymphoma', for which holds: 'hepatic lymphoma' 'disease_has_primary_anatomic_site' 'liver' 'hematopoietic and lymphatic system' 'gastrointestinal system' With this information we can now move on to the FMA to find out that 'hepatic artery' is a part of the 'liver' (such that any finding that indicates lymphoma at the hepatic artery would also imply the lymphoma at the liver). RadLex on the other hand informs that 'liver surgery' is a 'treatment' 'procedure'. Various types of this 'treatment' 'procedure' are 'hepatectomy', 'hepatic lobectomy', 'hepatic segmentectomy', 'hepatic subsegmentectomy', 'hepatic trisegmentectomy' or 'hepatic wedge excision', which can be used for disease treatment. Consequently, the radiologist who searches for information about liver lymphoma is presented with a set of patient health records, PubMed abstracts, radiology images etc. that are annotated using the terminology above. In this way, the radiologist's search space is reduced to a significantly small portion of the overdose of information available in multiple data stores. Moreover, he receives coherent data, i.e. images and patient text data that are related to each other, from a single access point without having to login to several different data stores at different locations. Ontology alignment is commonly understood as a special case of semantic integration that concerns the semi-automatic discovery of semantically equivalent concepts (sometimes also relations) across two or more ontologies. There are two commonly adopted approaches to ontology alignment; schema-based and instance-based, where most systems use both. Accordingly, the input of the former approach is the ontology schema only, whereas the input of the latter is the instance data i.e. 
the data that have been annotated with the ontology schema. Both approaches take advantage of linguistic and graph-based methods to help identify the correspondences. The most recent and comprehensive overview of work ontology alignment in general is reported by Ontology alignment is an increasingly active research field in the biomedical domain, especially in association with the Open Biomedical Ontologies (OBO) The focus of the work reported by On the medical imaging side, there are activities that concentrate around ImageClef Here, we describe our approach for the alignment of medical ontologies and outline the contributions of this thesis. In this respect, we first specify the general requirements for medical ontology alignment, which are then addressed by our approach. These are followed by the statement of the hypotheses of this work. Secondly, the materials that are relevant for this work are introduced. In particular, we describe the semantic resources and our domain corpora. Finally, an application scenario is described that exhibits the benefits of aligning medical ontologies. We describe this scenario as 'Clinical Query Extraction' and explain the idea behind. Drawing upon our experiences with the medical ontologies along the MEDICO use case we have identified some of their common characteristics that are relevant for the alignment process. These can be summarized as: 1. Generally, they are very large models. 2. They have extensive is-a hierarchies up to ten thousands of classes, which are organized according to different views. 3. They have complex relationships, where classes are connected by a number of different relations. 4. Their terminologies are rather stable (especially for anatomy) in that they should not differ much in the different models. 5. The modeling principles for them are well defined and documented. Based on these characteristics and the general requirements of the MEDICO use case, we de-rived the following requirements specifically for aligning medical ontologies: Linguistic processing: Medical ontologies are typically linguistically rich. For example, the FMA contains concept names as long as 'Anastomotic branch of right anterior inferior cerebellar artery with right superior cerebellar artery'. Such long multi-word terms are usually rich with implicit semantic relations. This characteristic shall be exploited by an intensive use of linguistic alignment methods. Use of external resources: As we are in a specific domain (medicine) and as we are not domain experts, we are in lack of domain knowledge. This missing domain knowledge shall be acquired from external resources, for example UMLS. Synonymy information in this resource and in other terminological resources is of particular interest. Non-machine learning approach: We do not have access to much instance data. This is partly because we are domain dependent. A more important reason, however, is that the special resource, the patient health records, which would provide a large amount of relevant instance data is very difficult obtain due to legal issues. Therefore, machine learning approaches, which require large portions of training data are not the optimal approach for our purposes. Structural matching: Medical ontologies typically come with rich structures that go beyond the basic is-a hierarchy. Most of them include a hierarchical ordering along the part-of hierarchies. 
Ontologies such as FMA additionally have part-of classification with higher granularity that include relations such as 'constitutional part-of Sequential matching: Medical ontologies are complex, so that their automatic processing is usually expensive. Therefore, a target concept will be identified (this target concept/term will be in practice the search query of the clinician. More details are explained under section 6.2) First lexical matching techniques shall be applied to identify the search query relevant parts of the ontologies. In other words, those concepts that lexically match the query shall be aligned as first. In this way, the lexical match acts as a filter on the medical ontology and decreases the amount of the computation necessary. Given this context, we focus on the evaluation of the following hypotheses: 1. Valid relationships (equivalence or other) exist between concepts from FMA, RadLex and from NCI. 2. Relationships between non-identical concept labels from the three ontologies can be discovered if these have common reference in a more general medical ontology. 3. Concept labels in these ontologies are most often in the form of long natural language phrases with regular grammars. Meaningful relationships (e.g. synonymy) across the three ontologies can be derived by processing these labels using transformation grammars. 4. Identification of medical image related query patterns (i.e. a certain combination of concept labels and relations) from corpora is more efficient when it is done based on the alignments. The ontology alignment approach proposed in this thesis has three main aspects. It suggests a combinatory strategy that is based on (a) the linguistic analysis of the ontology concept labels (the linguistic aspect), (b) on corpus analysis (context information aspect) and (c) on humancomputer interaction e.g. relevance feedback (user interaction aspect). The linguistic aspect draws on the observation that concept labels in medical ontologies (especially those about human anatomy) often contain implicit semantic relations as discussed by Transformation grammars can help here to detect the syntactic variants of the ontology concept labels. In other words, with the help of rules, the concept labels can be transformed into semantically equivalent but syntactically different word forms. For example, one concept label from the FMA and its corresponding commonly observed pattern (in brackets) is: 'Blood in aorta' (noun preposition noun) Using a transformation rule of the form, noun1 preposition:'in' noun2 => noun2 noun1 we can generate a variant as below with the equivalent semantics: 'aorta blood' (noun noun) This is profitable for at least two reasons. Firstly, it can help resolve possible semantic ambiguities (if one variant is ambiguous the other one can be preferred). Secondly, identified variants can be used to compare linguistic (textual) contexts of ontology concepts in corpora leading to the second aspect of our approach. Subsequently, the second aspect, the corpus analysis, builds on comparing linguistic (textual) contexts of ontology concepts in corpora and it assumes that concepts with similar meaning (originating from different ontologies) will appear in similar linguistic contexts. Here, the linguistic context of an ontology class (e.g. 'terminal ileum' from the FMA as in the example be- low) can be defined as the document in which it appears, the sentence in which it appears and a window of size N in which it appears. 
For example, a window size -5, +5 for the FMA concept "terminal ileum" would be: can be represented as a vector in form of: <token -5, token -4, … , token +4, token +5> <focal, lymphoid, hyperplasia, of, the, presenting, mantle, zone, hyperplasia, with> These vectors can then be pairwise compared, where most similar vectors indicate similar meaning of corresponding ontology concepts and alignment between ontology concepts follows from this. Finally, with the user interaction aspect we understand dynamic models of the ontology integration process. Within this dynamic process the ontology alignment happens during an interactive dialogue between the user and the system. In this way, clarifications and questions that elicit user's feedback support the ontology alignment process. An example interactive dialogue can be: (1) Radiologist: Show me the images of Ms. Jane Doe, she has "Amyotrophic Lateral Sclerosis" (NCI Cancer Thesaurus concept) (2) System: Ms. Doe doesn't have any images of "Amyotrophic Lateral Sclerosis". Is it equivalent to "Lou Gehrig Disease" (equivalent NCI Cancer Thesaurus concept) or to "ALS" (equivalent RadLex concept)? That attacks the neurons i.e. the nerve cells (FMA concept) Stephan Hawkins has it. (3) Radiologist: Yes, that is true. (4) System Ok. ALS is a kind of "Neuro Degenerative Disorder" (super-concept from RadLex) Do you want to see other images on Neuro Degenerative Disorders? This dialogue illustrates a real life question answering dialogue; where the utterances ( Foundational Model of Anatomy (FMA) is the most comprehensive machine processable resource on human anatomy. It covers 71,202 distinct anatomical concepts and more than 1.5 million relations instances from 170 relation types. The FMA can be accessed via the Foundational Model Explorer FMA also provides synonym information (up to 6 per concept), for example one synonym for 'Neuraxis' is the 'Central nervous system'. Because single inheritance is one of the modeling principles used in the FMA, every concept (except for the root) stands in a unique is-a relation to other concepts. Additionally, concepts are connected by seven kinds of part-of relationships (e.g., part of, constitutional part of, regional part of). The version we currently refer to is the version available in August 2008. The Radiology Lexicon (RadLex) is a controlled vocabulary developed and maintained by the Radiological Society of North America (RSNA) for the purpose of uniform indexing and retrieval of radiology information, including images. RadLex contains 11962 terms related to anatomy pathology, imaging techniques, and diagnostic image qualities. RadLex terms are organized along several relationships hence several hierarchies. Each term will participate in one of the relationships with its parent. Synonym information is given whenever it is present such as in 'Schatzki ring' and 'lower esophageal mucosal ring'. Examples of radiology specific relationships are 'thickness of projected image' or 'radiation dose'. The National Cancer Institute Thesaurus (NCI) provides standard vocabularies for cancer research. It covers around 34.000 concepts from which 10521 are related to Disease, Abnormality, Finding, 5901 are related to Neoplasm, 4320 to Anatomy and the rest are related to various other categories such as Gene, Protein, etc. The ontology model is structured around three components i.e. Concepts, Kinds and Roles. 
Concepts are represented as nodes in an acyclic graph, Roles are directed edges between the nodes and they represent the relationships between them. Kinds, on the other hand, are disjoint sets of concepts and they constrain the domain and the range of the relationships. Each concept belongs to only one Kind. Except for the root concept, every other concept has at least one is-a relationship to another concept. Every concept has one preferred name (e.g., 'Hodgkin Lymphoma'). Additionally, 1,207 concepts have a total of 2,371 synonyms (e.g., Hodgkin Lymphoma has the synonym 'Hodgkin's Lymphoma'). The Wikipedia anatomy, radiology and disease corpora have been constructed based on the Anatomy, Radiology and Diseases sections of the Wikipedia. Patient records would be the first choice, but due to strict anonymization requirements they are difficult to compile. Therefore, as an initial resource we constructed the corpora based on the Wikipedia. To set up the three corpora the related web pages were downloaded and a specific XML version for them was generated. The text sections of the XML files were run through the TnT part-of-speech parser The PubMed lymphoma corpus is set up to target the specific domain knowledge about lymphoma, a special type of cancer (one major use case of MEDICO is lymphoma). Thus, the lymphoma relevant subterminology from the NCI Thesaurus was extracted. This subterminology includes information about lymphoma types, their relevant thesaurus codes, synonyms, hyperonyms (or parent terms) and the corresponding thesaurus definitions. Using the lymphoma terminology, we identified from PubMed an initial set of most frequently reported lymphomas, e.g. the top five include 'Non-Hodgkin's Lymphoma'. The clinical questions corpus consists of health related questions asked among the medical experts and that were collected during a scientific survey. These questions (without answers) are available through the Clinical Questions Collection online repository. It can either be searched or browsed, for example, by a specific disease category. An example question from the Clinical Questions Collection is "What drugs are folic acid antagonists?" For each question, additional information about the expert asking the question, e.g. time and purpose, is encoded. To create the clinical questions corpus we downloaded the categories Neoplasms as well as Hemic and Lymphatic Diseases from the Clinical Questions Collection website. For each existing HTML page that reports on a question, we created a corresponding XML file. Currently there are 796 questions in our questions corpus. The clinical discussions corpus is ongoing work and it will be a corpus whose contents will be compiled from the various clinical discussion boards across the Web. These discussion boards usually contain questions and answers between and among the medical experts and patients. We expect the language to be less technical because of the user profile. The purpose of this corpus is to have a resource of clinical questions from such discussion boards. We distinguish between two kinds of evaluation techniques that can be applied to assess the quality of the alignments. Direct evaluation methods compare the results relative to human judgments as explained by Indirect evaluation methods, on the other hand, consider the performance of an application that uses the alignments.
Hence, any improvement in the performance of the application when it uses the alignments can be attributed to the quality of the alignments. In the following two subsections we first describe the baseline and then explain the planned application that shall use the alignments. The performance of this application, with and without the alignments, will be taken as a measure of the quality of these alignments. Our baseline for comparison is string matching after normalization on the concept labels from the input ontologies. Survey results The Ontology Alignment Evaluation Initiative We conceive of the clinical query extraction process as a use case that shows the benefits of semantic integration by means of ontology alignments. Clinical query extraction is a technique to semi-automatically predict possible clinical queries without having to depend on clinical interviews. It requires domain corpora (i.e. disease, anatomy and radiology) and domain ontologies to be able to identify the statistically most relevant concepts in the ontologies and the relations that hold between them. Consequently, concept-relation-concept triplets are identified, for which the assumption is that the statistically most relevant triplets are more likely to occur in clinical queries. Clinical query extraction can be viewed as a special case of term/relation extraction. Related approaches from the medical domain are reported by The identification of query patterns (i.e. the concept-relation-concept triplets) starts with the construction of domain corpora from related Web resources such as Wikipedia This statistical term/concept profiling can be viewed as a function that takes the domain (sub)ontologies and the corpora as input and returns the partially weighted domain ontologies as output, where the terms/concepts are ranked according to their weights. An example query pattern would relate a disease concept to an anatomical concept, e.g. 'hepatic lymphoma' 'disease_has_primary_anatomic_site' 'liver'. The clinical query extraction approach, as illustrated so far, builds on using domain ontologies, but uses them independently. That is, the entire statistical term profiling is based on processing the use case relevant terms (i.e. concepts) of the ontologies in isolation. In this respect the clinical query pattern extraction is a good potential application that can be used to evaluate the quality of the ontology alignments. As the current process is based on single concepts, the natural extension will be to perform the extraction based on aligned concepts. Any improvement in the identification of the query patterns from corpora can then be attributed to the quality of the alignments.
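As an illustration of the statistical profiling step, the sketch below ranks concept-relation-concept triplets by a simple sentence-level co-occurrence count of their two concepts in a domain corpus; the actual weighting scheme is not specified in the text, and single-token concepts are assumed purely for brevity.

```python
# Sketch: ranking concept-relation-concept triplets by corpus evidence.
from collections import Counter
from itertools import combinations

def rank_query_patterns(sentences, triplets):
    """sentences: lists of tokens; triplets: (concept1, relation, concept2)."""
    cooc = Counter()
    for sent in sentences:
        for a, b in combinations(sorted(set(sent)), 2):
            cooc[(a, b)] += 1
    weight = lambda tr: cooc[tuple(sorted((tr[0], tr[2])))]
    return sorted(triplets, key=weight, reverse=True)

patterns = [("lymphoma", "disease_has_primary_anatomic_site", "liver"),
            ("lymphoma", "disease_has_primary_anatomic_site", "kidney")]
corpus = [["hepatic", "lymphoma", "involves", "the", "liver"],
          ["liver", "lymphoma", "was", "confirmed"]]
print(rank_query_patterns(corpus, patterns))   # liver triplet ranked first
```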
Other tasks that are relevant for achieving the goal of this thesis concentrate on two main topics, including the collection and the preparation of data. As required by the linguistic aspect of our approach, an initial grammar will be set up and continuously improved to detect the variants of the ontology concept labels from the three ontologies mentioned earlier. Transformation rules will be used for this purpose. The open question about whether the ontology relations shall also be aligned will be investigated to determine the trade-offs of including vs. excluding them from the process. We consider using an external resource such as UMLS to obtain background knowledge that can help resolve possible semantic ambiguities. The appropriateness and adoptability of this resource will be assessed. Finally, the evaluation of the overall ontology alignment approach will be carried out, whereby a possible participation in the OAEI may also be considered. | 916 | 3,019 | 916 |
Training Adaptive Computation for Open-Domain Question Answering with Computational Constraints | Adaptive Computation (AC) has been shown to be effective in improving the efficiency of Open-Domain Question Answering (ODQA) systems. However, current AC approaches require tuning of all model parameters, and training state-of-the-art ODQA models requires significant computational resources that may not be available for most researchers. We propose Adaptive Passage Encoder, an AC method that can be applied to an existing ODQA model and can be trained efficiently on a single GPU. It keeps the parameters of the base ODQA model fixed, but it overrides the default layer-by-layer computation of the encoder with an AC policy that is trained to optimise the computational efficiency of the model. Our experimental results show that our method improves upon a state-of-theart model on two datasets, and is also more accurate than previous AC methods due to the stronger base ODQA model. All source code and datasets are available at https:// github.com/uclnlp/APE. | Open-Domain Question Answering (ODQA) requires finding relevant information for a given question and aggregating the information to produce an answer. The retriever-reader architecture, popularised by In this work, we explore an efficient approach to apply adaptive computation to large generative ODQA models. We introduce the Adaptive Passage Encoder (APE), a module that can be added to the encoder of an existing ODQA model, which has the following features: 1) it efficiently reuses the encoder's hidden representations for calculating the AC priorities; 2) it does not require tuning of the base model and hence allows efficient training under limited resource; 3) it does not require confidence calibration. Our experimental results on Nat-uralQuestions and TriviaQA show that our method improves the performance of the state-of-the-art model FiD | Open Domain Question Answering ODQA is a task that aims to answer a factoid question given a document corpus. Most works in this domain follow a retriever-reader design first proposed by However, thanks to recent advances in sequenceto-sequence pretrained language models Adaptive Computation Adaptive computation allows the model to condition the computation cost on the input. For example, In this section, we will introduce the base model and how our proposed adaptive passage encoder works with it. Large generative ODQA models where L is the number of encoder layers and N is the number of retrieved passages. We denote the hidden representation of the i-th passage at its j-th encoder layer as h j i . The decoder will attend to these hidden representations and generate the answer tokens sequentially. As shown in Fig. The HasAnswer model predicts the probability that a passage contains an answer to the question, given its hidden representation h j i . It first pools hidden representation h j i into a vector, then feeds the pooled representation to a multi-layer perceptron to produce the probability p j i . The scheduler is then responsible for the selection and prioritisation of passages that are likely to contain the answer To achieve this goal, the scheduler produces a priority score q n for each passage: where n is the passage rank by the retriever, l n is the index of its current encoder layer, g and f are two multi-layer perceptrons that learn the weight and bias respectively. 
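The displayed equation for the priority score q_n did not survive extraction. The PyTorch-style sketch below shows one plausible reading of the description just given, with two small MLPs g and f producing a learned weight and bias applied to the has-answer probability; the exact inputs of g and f in the original model (pooled hidden state, retriever rank n, layer index l_n) are assumptions here, not the paper's specification.

```python
import torch
import torch.nn as nn

class Scheduler(nn.Module):
    """Sketch of a priority score of the form q_n = g(.) * p_n + f(.)."""

    def __init__(self, hidden_size: int):
        super().__init__()
        # g produces the learned weight, f the learned bias (feature set assumed).
        self.g = nn.Sequential(nn.Linear(hidden_size + 2, hidden_size), nn.ReLU(),
                               nn.Linear(hidden_size, 1))
        self.f = nn.Sequential(nn.Linear(hidden_size + 2, hidden_size), nn.ReLU(),
                               nn.Linear(hidden_size, 1))

    def priority(self, pooled_h, rank, layer, p_has_answer):
        # pooled_h: pooled hidden representation of passage n at its current layer
        # rank, layer: the retriever rank n and current layer index l_n
        # p_has_answer: the HasAnswer probability for this passage/layer
        feats = torch.cat([pooled_h, torch.tensor([float(rank), float(layer)])])
        return self.g(feats) * p_has_answer + self.f(feats)

# Toy usage: score one passage (all numbers are placeholders).
sched = Scheduler(hidden_size=8)
q = sched.priority(torch.zeros(8), rank=3, layer=2, p_has_answer=0.7)
```

The budgeted loop that consumes these scores is described next.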
Starting at the initial layer for all passages, the scheduler selects the passage with the maximum priority, forwards one encoder layer for it (l n = l n + 1), and updates its priority q n using the new hidden representation of passage n at layer l n and the corresponding has-answer probability. This process iterates for B (budget) steps, and only the k passages with the most layers computed are retained in the end. Datasets, evaluation metrics and technical details follow prior work; we use FiD as the base ODQA model. Computational feasibility: tuning a FiD-base model with k = 20 or a FiD-large model with k = 10 (batch size = 1) would yield out-of-memory errors on a V100 (16GB) GPU. Hence, it is infeasible to train FiD with the previous AC method under such constraints. To understand how APE outperforms the baselines, including previous adaptive computation methods, we analyse the quality of the final top-k passages retained by APE. In this work, we explore an adaptive computation method that can be efficiently applied to an existing generative ODQA model. We find that, by replacing the encoder of generative ODQA models with our proposed adaptive passage encoder, we can train an effective adaptive computation policy without tuning the base model. This allows applying adaptive computation to large state-of-the-art generative models, which was previously challenging computation-wise. Our experimental results show that our method produces more accurate results than a state-of-the-art generative model on both NaturalQuestions and TriviaQA, and it outperforms the previous AC method by a large margin. The analysis also shows that our approach achieves better passage quality that leads to improvements in ODQA performance. | 965 | 853 | 965 |
Learning to Translate in Real-time with Neural Machine Translation | Translating in real-time, a.k.a. simultaneous translation, outputs translation words before the input sentence ends, which is a challenging problem for conventional machine translation methods. We propose a neural machine translation (NMT) framework for simultaneous translation in which an agent learns to make decisions on when to translate from the interaction with a pre-trained NMT environment. To trade off quality and delay, we extensively explore various targets for delay and design a method for beam-search applicable in the simultaneous MT setting. Experiments against state-of-the-art baselines on two language pairs demonstrate the efficacy of the proposed framework both quantitatively and qualitatively. 1 | Simultaneous translation, the task of translating content in real-time as it is produced, is an important tool for real-time understanding of spoken lectures or conversations In this paper, we propose a unified design for learning to perform neural simultaneous machine translation. The proposed framework is based on formulating translation as an interleaved sequence of two actions: READ and WRITE. Based on this, we devise a model connecting the NMT system and these READ/WRITE decisions. An example of how translation is performed in this framework is shown in Fig. We evaluate the proposed method on English-Russian (EN-RU) and English-German (EN-DE) translation in both directions ( §6). The quantitative results show strong improvements compared to both the NMT-based algorithm and a conventional segmentation methods. We also extensively analyze the effectiveness of the learning algorithm and the influence of the trade-off in the optimization criterion, by varying a target delay. Finally, qualitative visualization is utilized to discuss the potential and limitations of the framework. | Suppose we have a buffer of input words X = {x 1 , ..., x Ts } to be translated in real-time. We define the simultaneous translation task as sequentially making two interleaved decisions: READ or WRITE. More precisely, the translator READs a source word x η from the input buffer in chronological order as translation context, or WRITEs a translated word y τ onto the output buffer, resulting in output sentence Y = {y 1 , ..., y Tt }, and action sequence A = {a 1 , ..., a T } consists of T s READs and T t WRITEs, so T = T s + T t . Similar to standard MT, we have a measure Q(Y ) to evaluate the translation quality, such as BLEU score In the following sections, we first describe how to connect the READ/WRITE actions with the NMT system ( §3), and how to optimize the system to improve simultaneous MT results ( §4). The proposed framework is shown in Fig. Encoder: READ The first element of the NMT system is the encoder, which converts input words X = {x 1 , ..., x Ts } into context vectors H = {h 1 , ..., h Ts }. Standard NMT uses bi-directional RNNs as encoders Decoder: WRITE Similar with standard MT, we use an attention-based decoder. In contrast, we only reference the words that have been read from the input when generating each target word: where for τ , z τ -1 and y τ -1 represent the previous decoder state and output word, respectively. H η is used to represent the incomplete input states, where H η is a prefix of H. As the WRITE action calculates the probability of the next word on the fly, we need greedy decoding for each step: Note that y η τ , z η τ corresponds to H η and is the candidate for y τ , z τ . 
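As a concrete picture of how these per-step candidates are consumed, here is a minimal sketch of greedy simultaneous decoding. The NMT system and the agent (whose decision rule is described next) are treated as black-box callables, so the function signatures are assumptions for illustration rather than the paper's actual interface, and the handling of an exhausted source buffer is simplified.

```python
def simultaneous_greedy_decode(read_word, encode, predict, agent,
                               eos="</s>", max_steps=200):
    """Greedy READ/WRITE decoding sketch.

    read_word(): next source word from the input buffer, or None when empty.
    encode(word): append the word's context vector to the partial encoding.
    predict(): candidate target word given the current partial encoding/history.
    agent(observation): returns "READ" or "WRITE".
    """
    target = []
    source_exhausted = False
    for _ in range(max_steps):
        candidate = predict()
        action = agent({"candidate": candidate,
                        "source_exhausted": source_exhausted})
        if action == "READ" and not source_exhausted:
            word = read_word()
            if word is None:
                source_exhausted = True      # nothing left to read; force writing
            else:
                encode(word)
        else:                                # WRITE: commit the candidate
            target.append(candidate)
            if candidate == eos:
                break
    return target
```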
The agent described in the next section decides whether to take this candidate or wait for better predictions. A trainable agent is designed to make decisions Action Similarly to prior work • READ: the agent rejects the candidate and waits to encode the next word from input buffer; • WRITE: the agent accepts the candidate and emits it as the prediction into output buffer; Policy How the agent chooses the actions based on the observation defines the policy. In our setting, we utilize a stochastic policy π θ parameterized by a recurrent neural network, that is: where s t is the internal state of the agent, and is updated recurrently yielding the distribution of the action a t . Based on the policy of our agent, the overall algorithm of greedy decoding is shown in Algorithm 1, The algorithm outputs the translation result and a sequence of observation-action pairs. Algorithm 1 Simultaneous Greedy Decoding Require: NMT system φ, policy π θ , τ MAX , input buffer X, output buffer Y , state buffer S. if a t = READ and x η = /s then 9: 10: else if a t = WRITE then 13: if y τ = /s then break The proposed framework can be trained using reinforcement learning. More precisely, we use policy gradient algorithm together with variance reduction and regularization techniques. We need an NMT environment for the agent to explore and use to generate translations. Here, we simply pre-train the NMT encoder-decoder on full sentence pairs with maximum likelihood, and assume the pre-trained model is still able to generate reasonable translations even on incomplete source sentences. Although this is likely sub-optimal, our NMT environment based on uni-directional RNNs can treat incomplete source sentences in a manner similar to shorter source sentences and has the potential to translate them more-or-less correctly. The policy is learned in order to increase a reward for the translation. At each step the agent will receive a reward signal r t based on (o t , a t ). To evaluate a good simultaneous machine translation, a reward must consider both quality and delay. Quality We evaluate the translation quality using metrics such as BLEU The BLEU score is defined as the weighted geometric average of the modified n-gram precision BLEU 0 , multiplied by the brevity penalty BP to punish a short translation. In practice, the vanilla BLEU score is not a good metric at sentence level because being a geometric average, the score will reduce to zero if one of the precisions is zero. To avoid this, we used a smoothed version of BLEU for our implementation where Y * is the reference and Y is the output. We decompose BLEU and use the difference of partial BLEU scores as the reward, that is: where Y t is the cumulative output at t (Y 0 = ∅), and Obviously, if a t = READ, no new words are written into Y , yielding r Q t = 0. Note that we do not multiply BP until the end of the sentence, as it would heavily penalize partial translation results. Delay As another critical feature, delay judges how much time is wasted waiting for the translation. Ideally we would directly measure the actual time delay incurred by waiting for the next word. For simplicity, however, we suppose it consumes the same amount of time listening for one more word. We define two measurements, global and local, respectively: • Average Proportion (AP): following the definition in d is a global delay metric, which defines the average waiting proportion of the source sentence when translating each word. 
• Consecutive Wait length (CW): in speech translation, listeners are also concerned with long silences during which no translation occurs. To capture this, we also consider on how many words were waited for (READ) consecutively between translating two words. For each action, where we initially define c 0 = 0, • Target Delay: We further define "target delay" for both d and c as d * and c * , respectively, as different simultaneous translation applications may have different requirements on delay. In our implementation, the reward function for delay is written as: Trade-off between quality and delay A good simultaneous translation system requires balancing the trade-off of translation quality and time delay. Obviously, achieving the best translation quality and the shortest translation delays are in a sense contradictory. In this paper, the trade-off is achieved by balancing the rewards r t = r Q t +r D t provided to the system, that is, by adjusting the coefficients α, β and the target delay d * , c * in Eq. 9. Policy Gradient We freeze the pre-trained parameters of an NMT model, and train the agent using the policy gradient is the cumulative future rewards for current observation and action. In practice, Eq. 10 is estimated by sampling multiple action trajectories from the current policy π θ , collecting the corresponding rewards. Directly using the policy gradient suffers from high variance, which makes learning unstable and inefficient. We thus employ the variance reduction techniques suggested by We also regularize the negative entropy of the policy to facilitate exploration. Algorithm 2 Learning with Policy Gradient Require: NMT system φ, agent θ, baseline ϕ 1: Pretrain the NMT system φ using MLE; 2: Initialize the agent θ; 3: while stopping criterion fails do 4: Obtain a translation pairs: {(X, Y * )}; 5: for (Y, S) ∼ Simultaneous Decoding do 6: for (o t , a t ) in S do 7: Compute the quality: r Q t ; 8: Compute the delay: r D t ; 9: Compute the baseline: b ϕ (o t ); 10: Collect the future rewards: {R t }; 11: Perform variance reduction: { Rt }; 12: Update: The overall learning algorithm is summarized in Algorithm 2. For efficiency, instead of updating with stochastic gradient descent (SGD) on a single sentence, both the agent and the baseline are optimized using a minibatch of multiple sentences. In previous sections we described a simultaneous greedy decoding algorithm. In standard NMT it has been shown that beam search, where the decoder keeps a beam of k translation trajectories, greatly improves translation quality It is non-trivial to directly apply beam-search in simultaneous machine translation, as beam search waits until the last word to write down translation. Based on our assumption WRITE does not cost delay, we can perform a simultaneous beam-search when the agent chooses to consecutively WRITE: keep multiple beams of translation trajectories in temporary buffer and output the best path when the agent switches to READ. As shown in Fig. Note that we do not re-train the agent for simultaneous beam-search. At each step we simply input the observation of the current best trajectory into the agent for making next decision. To extensively study the proposed simultaneous translation model, we train and evaluate it on two different language pairs: "English- German (EN-DE)" and "English-Russian (EN-RU)" in both directions per pair. 
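The displayed formulas for the delay measures (AP and CW) and for the delay reward of Eq. 9 are garbled in this extraction. The sketch below gives one formulation consistent with the prose, penalizing delay only beyond the targets d* and c*; the exact definitions, signs and per-step bookkeeping in the original may differ, and the quality term (the difference of partial smoothed BLEU scores) is treated as given and omitted here.

```python
def average_proportion(num_read_when_written, src_len, tgt_len):
    """AP: average fraction of the source consumed when each target word is emitted.

    num_read_when_written[t] = number of source words READ before target word t
    was WRITTEN.
    """
    return sum(num_read_when_written) / (src_len * tgt_len)

def consecutive_waits(actions):
    """CW per WRITE: how many READs occurred consecutively before each WRITE."""
    waits, current = [], 0
    for a in actions:
        if a == "READ":
            current += 1
        else:                     # WRITE
            waits.append(current)
            current = 0
    return waits

def delay_reward(ap, cw, d_target, c_target, alpha, beta):
    """One plausible delay reward: penalize only delay that exceeds the targets."""
    return -(alpha * max(cw - c_target, 0) + beta * max(ap - d_target, 0))
```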
We use the parallel corpora available from WMT'15 Environment & Agent Settings We pre-trained the NMT environments for both language pairs and both directions following the same setting from We compare the proposed methods against previously proposed baselines. For fair comparison, we use the same NMT environment: • Wait-Until-End (WUE): an agent that starts to WRITE only when the last source word is seen. In general, we expect this to achieve the best quality of translation. We perform both greedy decoding and beam-search with this method. • Wait-One-Step (WOS): an agent that WRITEs after each READs. Such a policy is problematic when the source and target language pairs have different word orders or lengths (e.g. EN-DE). Every time we only keep one target for one delay measure. For instance when using target AP, the coefficient of α in Eq. 9 will be set 0. For each target, we select the model that maximizes the quality-to-delay ratio ( BLEU AP ) on the validation set. The baselines are also plotted ( : WOS $: WUE, ×: WID, +: WIW). • Wait-If-Worse/Wait-If-Diff (WIW/WID): as proposed by • Segmentation-based (SEG) In order to evaluate the effectiveness of our reinforcement learning algorithms with different re-ward functions, we vary the target delay d * ∈ {0.3, 0.5, 0.7} and c * ∈ {2, 5, 8} for Eq. 9 separately, and trained agents with α and β adjusted to values that provided stable learning for each language pair according to the validation set. As shown in Fig. As shown in Fig. In Fig. Compared to the method of Cho and Esipova (2016) based on two hand-crafted rules (WID, WIW), in most cases our proposed models find better trade-off points, while there are a few exceptions. We also observe that the baseline models have trouble controlling the delay in a reasonable area. In contrast, by optimizing towards a given target delay, our proposed model is stable while maintaining good translation quality. We also compared against Oda et al. ( w/o Beam-Search We also plot the results of simultaneous beam-search instead of using greedy decoding. It is clear from Fig. In this section, we perform a more in-depth analysis using examples from both EN-RU and EN-DE pairs, in order to have a deeper understanding of the proposed algorithm and its remaining limitations. We only perform greedy decoding to simplify visualization. As shown in Fig Researchers commonly consider the problem of simultaneous machine translation in the scenario of real-time speech interpretation Recently, two research groups have tried to apply the NMT framework to the simultaneous translation task. The proposed framework is also related to some recent efforts about online sequence-to-sequence (SEQ2SEQ) learning. We propose a unified framework to do neural simultaneous machine translation. To trade off quality and delay, we extensively explore various targets for delay and design a method for beamsearch applicable in the simultaneous MT setting. Experiments against state-of-the-art baselines on two language pairs demonstrate the efficacy both quantitatively and qualitatively. | 720 | 1,096 | 720 |
Punctuation: Making a Point in Unsupervised Dependency Parsing | We show how punctuation can be used to improve unsupervised dependency parsing. Our linguistic analysis confirms the strong connection between English punctuation and phrase boundaries in the Penn Treebank. However, approaches that naively include punctuation marks in the grammar (as if they were words) do not perform well with Klein and Manning's Dependency Model with Valence (DMV). Instead, we split a sentence at punctuation and impose parsing restrictions over its fragments. Our grammar inducer is trained on the Wall Street Journal (WSJ) and achieves 59.5% accuracy out-of-domain (Brown sentences with 100 or fewer words), more than 6% higher than the previous best results. Further evaluation, using the 2006/7 CoNLL sets, reveals that punctuation aids grammar induction in 17 of 18 languages, for an overall average net gain of 1.3%. Some of this improvement is from training, but more than half is from parsing with induced constraints, in inference. Punctuation-aware decoding works with existing (even already-trained) parsing models and always increased accuracy in our experiments. | Unsupervised dependency parsing is a type of grammar induction -a central problem in computational linguistics. It aims to uncover hidden relations between head words and their dependents in free-form text. Despite decades of significant research efforts, the task still poses a challenge, as sentence structure is underdetermined by only raw, unannotated words. Structure can be clearer in formatted text, which typically includes proper capitalization and punctuation HTML is another kind of meta-data that is ordinarily stripped out in pre-processing. However, recently ..., whereas McCain is secure on the topic, Obama <a>[ VP worries about winning the pro-Israel vote]</a>. We propose exploring punctuation's potential to aid grammar induction. Consider a motivating example (all of our examples are from WSJ), in which all (six) marks align with constituent boundaries: This link between punctuation and constituent boundaries suggests that we could approximate parsing by treating inter-punctuation fragments independently. In training, our algorithm first parses each fragment separately, then parses the sequence of the resulting head words. In inference, we use a better approximation that allows heads of fragments to be attached by arbitrary external words, e.g.: The Soviets complicated the issue by offering to [ VP include light tanks], [ SBAR which are as light as ... ]. | Frac Punctuation and syntax are related But are there simple enough connections between the two to aid in grammar induction? This section explores the regularities. Our study of punctuation in WSJ Out of 51,558 sentences, most -37,076 (71.9%)contain sentence-internal punctuation. These punctuated sentences contain 123,751 fragments, nearly all -111,774 (90.3%) -of them multi-token. Common part-of-speech (POS) sequences comprising fragments are diverse (note also their flat distribution -see Table This production is flagged because the fragment NP VP is not a constituent -it is two; still, 49.4% of all fragments do align with whole constituents. Inter-punctuation fragments correspond more strongly to dependencies (see Table Let [x, y] be a fragment (or markup) spanning positions x through y (inclusive, with 1 ≤ x < y ≤ l), in a sentence of length l. And let [i, j] h be a sealed span headed by h (1 ≤ i ≤ h ≤ j ≤ l), i.e., the word at position h dominates precisely i . . . 
j (but none other): Define inside(h, x, y) as true iff x ≤ h ≤ y; and let cross(i, j, x, y) be true iff The three tightest constraints impose conditions which, when satisfied, disallow sealing ... the British daily newspaper, The Financial Times . requires that everything in x . . . y fall under h, with only h allowed external attachments. This holds for 74.0% of fragments -87.5% of markup, failing when cross(i, j, x, y). ... arrests followed a " Snake Day " at Utrecht ... i x h = j = y sprawl -still requires that h derive x . . . y but lifts restrictions on external attachments. Holding for 92.9% of fragments (95.1% of markup), it fails when cross(i, j, x, y) ∧ ¬inside(h, x, y). Maryland Club also distributes tea , which ... These three strictest constraints lend themselves to a straight-forward implementation as an O(l 5 ) chartbased decoder. Ordinarily, the probability of [i, j] h is computed by multiplying the probability of the associated unsealed span by two stopping probabilities -that of the word at h on the left (adjacent if i = h; non-adjacent if i < h) and on the right (adjacent if h = j; non-adjacent if h < j). To impose a constraint, we ran through all of the annotations [x, y] associated with a sentence and zeroed out this probability if any of them satisfied disallowed conditions. There are faster -e.g., O(l 4 ), and even O(l 3 )recognizers for split head automaton grammars Relaxed constraints disallow joining adjacent subtrees, e.g., preventing the seal [i, j] h from merging below the unsealed span [j + 1, J] H , on the left: tear -prevents x . . . y from being torn apart by external heads from opposite sides. It holds for 94.7% of fragments (97.9% of markup), and is violated when (x ≤ j ∧ y > j ∧ h < x), in this case. ... they "were not consulted about the [Ridley decision ] in advance and were surprised at the action taken . thread -requires only that no path from the root to a leaf enter again, in this case. Example that satisfies thread but violates tear: The ... changes "all make a lot of sense to me," he added. The case when [i, j] h is to the right is entirely symmetric, and these constraints could be incorporated in a more sophisticated decoder (since i and J do not appear in the formulae, above). We implemented them by zeroing out the probability of the word at H attaching that at h (to its left), in case of a violation. Note that all five constraints are nested. In particular, this means that it does not make sense to combine them, for a given annotation [x, y], since the result would just match the strictest one. Our markup number for tear is lower (97.9 versus 98.9%) than Common structures that violate thread (and, consequently, all five of the constraints) include, e.g., "seamless" quotations and even ordinary lists: Her recent report classifies the stock as a "hold." The company said its directors, management and subsidiaries will remain long-term investors and ... Most punctuation-induced constraints are less accurate than the corresponding markup-induced constraints (e.g., sprawl: 92.9 vs. 95.1%; loose: 74.0 vs. 87.5%; but not strict: 39.2 vs. 35.6%). However, markup is rare: Spitkovsky et al. (2010b, §5.1) observed that only 10% of the sentences in their blog were annotated; in contrast, over 70% of the sentences in WSJ are fragmented by punctuation. Fragments are more than 40% likely to be dominated by a clause; for markup, this number is below 10% -nearly 75% of it covered by noun phrases. 
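Before continuing, here is a small sketch of the span predicates and the disallowed conditions given above for the 74.0%-precision constraint (referred to as loose later in the text) and for sprawl. The displayed definition of cross(·) was lost in extraction, so the standard crossing-brackets condition is assumed here; the chart-decoder detail of zeroing the sealed span's probability is paraphrased in the comments.

```python
def inside(h, x, y):
    """True iff the head position h lies within the annotation span [x, y]."""
    return x <= h <= y

def cross(i, j, x, y):
    """True iff spans [i, j] and [x, y] overlap without either containing the other.

    Assumed standard crossing condition; the original displayed definition is lost.
    """
    return (i < x <= j < y) or (x < i <= y < j)

def seal_disallowed(constraint, i, j, h, x, y):
    """Check whether sealing [i, j]_h violates annotation [x, y].

    In the chart-based decoder described above, the probability of the sealed
    span is zeroed out whenever this returns True for any annotation. The
    relaxed constraints (tear, thread) act on attachments rather than seals and
    are not reproduced in this sketch.
    """
    if constraint == "loose":
        return cross(i, j, x, y)
    if constraint == "sprawl":
        return cross(i, j, x, y) and not inside(h, x, y)
    raise ValueError(f"unsupported constraint in this sketch: {constraint}")
```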
Further, inter-punctuation fragments are spread more evenly under noun, verb, prepositional, adverbial and adjectival phrases (approximately 27:13:10:3:1 versus 75:13:2:1:1) than markup. We model grammar via Our system is based on Laplace-smoothed Viterbi EM, following Training with punctuation replaces ordinary Viterbi parse trees, at every iteration of EM, with the output of a constrained decoder. In all experiments other than #2 ( §5) we train with the loose constraint. Constrained Inference We trained on the Penn English Treebank's Wall Street Journal portion We report directed accuracies -fractions of correctly guessed arcs, including the root, in unlabeled reference dependency parse trees, as is also standard practice Our primary baseline is the basic system without constraints (standard training). It ignores punctuation, as is standard, scoring 52.0% against WSJ45. A secondary (punctuation as words) baseline in-corporates punctuation into the grammar as if it were words, as in supervised dependency parsing Our first experiment compares "punctuation as constraints" to the baseline systems. We use default settings, as recommended by To facilitate comparison with prior work, we also report accuracies against shorter sentences, with up to ten non-punctuation tokens (WSJ10 -see Table These are multi-point increases, but they could disappear in a more accurate state-of-the-art system. To test this hypothesis, we applied constrained decoding to a supervised system. We found that this (ideal) instantiation of the DMV benefits as much or more than the unsupervised systems: accuracy increases from 69.8% to 73.0%. Punctuation seems to capture the kinds of, perhaps long-distance, regularities that are not accessible to the model, possibly because of its unrealistic independence assumptions. 5 Experiment #2: Optimal Settings We next re-examined the choices of constraints. Our full factorial analysis was similar, but significantly smaller, than A full analysis is omitted due to space constraints. Our first observation is that constrained inference, using punctuation, is helpful and robust. It boosted accuracy (on WSJ45) by approximately 1.5%, on average, with all settings. Indeed, sprawl was consistently (but only slightly, at 1.6%, on average) better than the rest. Second, constrained training hurt more often than it helped. It degraded accuracy in all but one case, loose, where it gained approximately 0.4%, on average. Both improvements are statistically significant: p ≈ 0.036 for training with loose; and p ≈ 5.6 × 10 -12 for decoding with sprawl. So far, punctuation has improved grammar induction in a toy setting. But would it help a modern system? Our next two experiments employ a slightly more complicated set-up, compared with the one used up until now ( §3.1). The key difference is that this system is lexicalized, as is standard among the more accurate grammar inducers The purpose of these experiments is to compare the punctuation-enhanced DMV with other, recent stateof-the-art systems. We find that, lexicalized ( §6), our approach performs better, by a wide margin; without lexicalization ( §3.1), it was already better for longer, but not for shorter, sentences (see Tables We trained a variant of our system without gold part-of-speech tags, using the unsupervised word clusters This final batch of experiments probes the generalization of our approach ( §6) across languages. 
The data are from 2006/7 CoNLL shared tasks 7 With the exception of Arabic '07, from which we discarded one sentence with 145 tokens. We down-weighed languages appearing in both years by 50% in our analyses, and excluded Chinese entirely, since it had already been cut up at punctuation. 8 Note that punctuation was treated differently in the two years: in '06, it was always at the leaves of the dependency trees; in '07, it matched original annotations of the source treebanks. For both, we used punctuation-insensitive scoring ( §3.2). We did not detect synergy between the two improvements. However, note that without constrained training, "full" data sets do not help, on average, despite having more data and lexicalization. Furthermore, after constrained training, we detected no evidence of benefits to additional retraining: not with the relaxed sprawl constraint, nor unconstrained. Punctuation has been used to improve parsing since rule-based systems Parsing Techniques Most-Similar to Constraints A "divide-and-rule" strategy that relies on punctuation has been used in supervised constituent parsing of long Chinese sentences Incorporating partial bracketings into grammar induction is an idea tracing back to State-of-the-art in unsupervised dependency parsing We pursued a complementary strategy: using Klein and Manning's (2004) much simpler Dependency Model with Valence (DMV), but persistently steering training away from certain constructions, as guided by punctuation, to help prevent underfitting. Punctuation is hard to predict, 9 partly because it can signal long-range dependences tection in Wikipedia Punctuation improves dependency grammar induction. Many unsupervised (and supervised) parsers could be easily modified to use sprawl-constrained decoding in inference. It applies to pre-trained models and, so far, helped every data set and language. Tightly interwoven into the fabric of writing systems, punctuation frames most unannotated plaintext. We showed that rules for converting markup into accurate parsing constraints are still optimal for inter-punctuation fragments. Punctuation marks are more ubiquitous and natural than web markup: what little punctuation-induced constraints lack in precision, they more than make up in recall -perhaps both types of constraints would work better yet in tandem. For language acquisition, a natural question is whether prosody could similarly aid grammar induction from speech Our results underscore the power of simple models and algorithms, combined with common-sense constraints. They reinforce insights from joint modeling in supervised learning, where simplified, independent models, Viterbi decoding and expressive constraints excel at sequence labeling tasks | 1,097 | 1,387 | 1,097 |
Polyglot Prompting: Multilingual Multitask Prompt Training | This paper aims for a potential architectural improvement for multilingual learning and asks: Can different tasks from different languages be modeled in a monolithic framework, i.e. without any task/language-specific module? The benefit of achieving this could open new doors for future multilingual research, including allowing systems trained on low resources to be further assisted by other languages as well as other tasks. We approach this goal by developing a learning framework named Polyglot Prompting to exploit prompting methods for learning a unified semantic space for different languages and tasks with multilingual prompt engineering. We performed a comprehensive evaluation of 6 tasks, namely topic classification, sentiment classification, named entity recognition, question answering, natural language inference, and summarization, covering 24 datasets and 49 languages. The experimental results demonstrated the efficacy of multilingual multitask prompt-based learning and led to inspiring observations. We also present an interpretable multilingual evaluation methodology and show how the proposed framework, multilingual multitask prompt training, works. We release all datasets prompted in the best setting and code. | The emergence of multilingual pre-trained language models makes it possible for systems trained on low-resource languages to be compensated through the higher-resource languages shared with them. Despite this preliminary success in low-resource scenarios using shared knowledge across languages in multilingual language models, unifying different tasks into one framework remains challenging if we are to avoid introducing additional task-specific parameterized modules. Recently, prompting methods have seen considerable success. In this paper, we leverage prompt techniques to cross the boundaries of different tasks and languages so that multiple tasks in different languages can be placed in a monolithic framework. We name this multilingual multitask training model Polyglot Prompting (PolyPrompt). Different tasks from different languages can then be seamlessly connected by being reformulated as pre-training tasks. Architecturally, we choose the encoder-decoder pre-training framework so that more NLP tasks can be unified, compared to other architectures such as the masked language model that favors classification-based tasks. Our explorations in this paper are driven by the following research questions: Q1: Can different tasks from different languages benefit from each other by a monolithic framework? If the answer is "yes", can the performance be further improved by introducing more high-resource datasets that are more readily available? | Multitask & Multilingual Learning: The development of neural networks has made it easier to share information across tasks or languages. As such, in the past few years, there has been much work on multitask learning within the same language. Prompting Methods: Prompting is a technique that aims to make better use of pre-trained knowledge by reformulating the tasks at hand accordingly. We unify different tasks from different languages by reformulating each NLP task as a sequence-to-sequence problem. We choose a sequence-to-sequence language model to achieve multilingual multitask prompt training, where samples from n tasks will be the input of the chosen language model.
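As an illustration of this sequence-to-sequence reformulation, the sketch below converts an NLI example into a prompted input/target text pair; the template wording and the example content are illustrative placeholders, not the actual templates or data released with the paper.

```python
def nli_to_seq2seq(premise, hypothesis, label):
    """Reformulate one NLI example as an (input text, target text) pair.

    The template wording here is only a placeholder for illustration.
    """
    source = (f'premise: "{premise}" hypothesis: "{hypothesis}" '
              f"Does the premise entail the hypothesis? entailment, neutral or contradiction?")
    target = label          # the decoder is trained to emit the label as plain text
    return source, target

# Toy XNLI-style example (content invented). With a cross-lingual prompt, the
# English template above is reused unchanged for examples in any language.
src, tgt = nli_to_seq2seq("Der Hund schläft im Garten.",
                          "Ein Tier ruht sich draußen aus.",
                          "entailment")
```

Verbalizing QA, classification and summarization examples in the same way is what lets a single encoder-decoder model be trained on all tasks with the loss described next.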
The loss function is to maximize the log-likelihood of the output text and can be defined as: where (x, ŷ) ∈ Z represents a sequence-tosequence text pair for any task. |ŷ| is the number of tokens in the decoded text, and ŷ<m is the target tokens before the time step m. 4 Experiment Setup Model We list 5 models explored in this work. (1) Vanilla mT5: In the cross-lingual zero-shot transfer setting, mT5 is trained on the training set in English of the specific task (e.g. XNLI), while in the in-lingual training setting, mT5 is trained on the training samples in all languages for the particular task (e.g. XNLI). (2) Polyglot Prompt (PolyPrompt) is a standard multilingual multitask prompt training model,which is trained on 7 target datasets covering 4 NLP tasks (e.g., QA). (3) PolyPrompt+Expand is the PolyPrompt model trained on the 7 target datasets and 15 highresource (English) expanding datasets. (4) PolyPrompt+Expand+PANX is the PolyPrompt trained on the 7 target datasets, 15 high-resource datasets, and a multilingual NER dataset (PANX). (5) PolyPrompt+Expand+XLSum is the PolyPrompt trained on the 7 target datasets, 15 high-resource datasets, and a multilingual summarization dataset (XL-Sum). Parameters The PolyPrompt model is built based on the mT5 Training Data Construction Some datasets have a large number of training samples, for example, XNLI has 4.5 million training samples. To reduce the expensive computational cost of our experiments, we randomly sampled 3, 000 samples from the training set for each language of the target datasets, and 5, 000 samples from each expanding dataset. These selected samples will serve as the training set for multilingual multitask prompt training with different experiment scenarios. Experimental Scenario We consider three experimental scenarios: (1) In-language training, fine-tuned on golden data in all target languages. Like (2) Crosslingual zero-shot transfer (3) Cross-task & cross-lingual zero-shot transfer, where a model is evaluated on tasks and languages that did not appear in its training dataset. The experiment in this section is designed to answer the research question Q1 in Sec.1, namely to investigate whether multilingual multitask prompt training (PolyPrompt) can achieve improvement, and whether the performance can be further improved by introducing more high-resource datasets. To examine whether the PolyPrompt and its variants are significantly better than the vanilla mT5, we perform the significance test with Wilcoxon's Signed-rank Test (1) Cross-lingual prompt can help better retrieve knowledge encoded in language model. We can observe from Fig. (2) The unified template outperforms the diversified template In Fig. This section aims at the research question Q2 (How do different characteristics of datasets and languages affect the performance of PolyPrompt?) by introducing a multilingual interpretable evaluation. Interpretable evaluation In this section, we try to find out what prompts or prompt combinations are suitable for multilingual and multitask scenarios (Q3). Although prompting methods have proven effective in many NLP scenarios, its effectiveness comes at the cost of prompt engineering uniformity of prompt templates designed for multilingual multitask setting. The examples of the considered prompt design can be seen in Tab. 4. Language Choice: we consider both the inlingual and cross-lingual prompts. 
In-lingual prompts are those in which the language of the prompt is the same as the target language To investigate whether PolyPrompt is better at retrieving relevant knowledge from pre-trained language models for tasks and languages unseen in training stage, we investigate vanilla mT5, PolyPrompt, and PolyPrompt+Expand fine-tuned on the English datasets and evaluate these three models on the PANX dataset, a named entity recognition task with 40 languages. We then subtract the performance of vanilla mT5 from PolyPrompt and PolyPrompt+Expand in the same language, and the results are shown in Fig. (1) Almost all languages benefit from both PolyPrompt and its variants. PolyPrompt brings gains for 34 of the 40 languages, and more languages will benefit when PolyPrompt is enhanced with high-resource English training datasets. Interestingly, PolyPrompt+Expand per- formed much better than PolyPrompt in languages belonging to IE: Germanic and IE: Romance language families, which made up a large proportion of samples in the pre-training corpus of mT5. (2) PolyPrompt significantly improves performance on languages that have never appeared in the pre-training corpus of mT5. Both PolyPrompt and PolyPrompt+Expand improve a lot over mT5 on tl, a language that never appeared in mT5's pretraining corpus. Furthermore, PolyPrompt+Expand achieves the best performance gain on tl. The reasons can be attributed to (1) we unify different tasks into a monolithic framework (including NER), which effectively shortens the distance between different tasks; (2) English (en) and tl share the same semantic space, NER knowledge in English (en) can be effectively transferred to tl. We can provide the following preliminary empirical answers to our research questions. (1) Can different tasks from different languages benefit from each other by a monolithic framework? Yes. What's more, introducing more highresource datasets can further improve the tasks' performance involved in multitask prompt training. (2) How do different characteristics of datasets and languages affect the performance of PolyPrompt? PolyPrompt cannot benefit all languages in all datasets. For example, (a) languages that appear only once in target datasets have benefits when PolyPrompt is enhanced by high-resource datasets; (b) PolyPrompt is better in short context samples for MLQA, long context samples for XQuAD, while poor in long question samples for XQuAD, TyDiQA, and MLQA. (3) What makes a good prompt for multilingual multitask prompt training? The best performance is achieved when the model is equipped with crosslingual prompts (i.e., using English as prompt templates regardless of what the language of training samples is) and prompts with unified templates across tasks. Although in this paper, we try to cover as many languages and tasks as possible, some tasks (e.g., semantic parsing, machine translation) and languages are still not considered. In addition, due to limited computational resources, we adopt a relatively small pre-trained language model, and the results on the larger pre-trained language models are also worth expecting. In addition, there are a variety of factors affecting the design of prompts in a multilingual setting. This paper only considers two (language choice and uniformity of prompt templates), so more comprehensive studies in this direction could be conducted. version of the primary task, discarding Thai and Japanese languages and samples without answers. 
Like XQuAD and MLQA, TyDiQA is evaluated with SQuAD 1.1 XNLI MLDOC Expanding tasks simply provide training sets for the multitask prompt training. In summary, we studied 15 English and 2 multilingual datasets. Extractive Question Answering is the task of finding an answer to a given question from the context. We adopt SQuAD 2.0 Multiple-choice Question Answering aims to select an answer from candidate options based on the context and question. In this work, we study MCTest Natural Language Inference aims to determine the inference relation (e.g. entailment) between two texts. The datasets used in this work are Quora, Topic Classification is a task to predict a suitable topic (e.g., health) for a given text. We use the following topic classification datasets: DBpedia-2014 Sentiment Classification aims to identify the sentiment polarity of a given text. We studied datasets IMDB XL-Sum For interpretable evaluation, the first step is attribute definition, and the second is sample breakdown. Assume that ϕ Len (x) is a function to calculate the number of tokens in the given text x, and ϕ BLUE (x 1 , x 2 ) is to compute the BLUE score of two given texts x 1 and x 2 . The following are the features tailored for the 7 multilingual datasets in this paper: • XQuAD, TyDiQA, MLQA: cLen=ϕ Len (X c ), qLen=ϕ Len (X q ), aLen=ϕ Len (X a ), and BLUE_AC= ϕ BLUE (X a , X c ), where X c , X q , and X a denote the context, question, and answer sequence, respectively. • PAWS-X, XNLI: t1Len = ϕ Len (X t1 ), t2Len = ϕ Len (X t2 ), t1Len/t2Len = ϕ Len (X t1 )/ϕ Len (X t2 ), and BLUE_t1t2 = ϕ BLUE (X t1 , X t2 ), where X t1 and X t2 denote the premise and hypothesis (sentence-1 and sentence-2 for PAWS-X) sequence. • MARC, MLDOC: t1Len=ϕ Len (X t1 ), t1basic = ϕ basic (X t1 ), and t1eNum = ϕ eNum (X t1 ), where X t1 denotes a sequence of review (news for ML-DOC). ϕ basic (x) and ϕ eNum (x) are functions to calculate the proportion of words belonging to the 1000 essential English words 6 and entities, respectively. We then follow Due to the space limitation, we summarize some main observations here. (1) Whether a language that appears in only one task could gain improvement depends on the difficulty of the task. In Fig. (2) PolyPrompt improves the performance of non-Indo-European languages a lot in the inlanguage training. From Fig. 6 (3) For low-resource languages, PolyPrompt with in-language prompts will bring more gains, while cross-lingual prompts bring more gains when introducing high-resource training datasets. From Fig. However, when the external English dataset was introduced (PolyPrompt+Expand), cross-language prompts have more gains in both low and high resource languages. With the introduction of multilingual datasets (PolyPrompt+Expand+XLSum), the relative advantages of cross-lingual prompts increased. Dataset-level Features We also obtain the dataset-level features. Given a dataset D and a feature p (e.g. qLen), the dataset-level feature can be defined as: where d is a sample of the test set D te ∈ D, and ϕ p (⋅) is a function that computes the feature value for a given sample. For example, ϕ qLen (MLQA) denotes the average question length of the MLQA. Dataset bias is measured by ϕ p , the dataset-level feature defined in Eq. 3. Tab. 8 shows five target datasets explored in Sec. 5.2. Tab. 5 presents the cross-lingual (English) prompt templates explored in this work. We designed 5 templates for each of the 7 tasks. | 1,237 | 1,395 | 1,237 |
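Two displayed formulas from this excerpt did not survive extraction: the training objective (described as maximizing the log-likelihood of the output text over pairs (x, ŷ) ∈ Z) and the dataset-level feature of Eq. 3 (whose prose says, for example, that qLen over MLQA is the average question length). Plausible LaTeX reconstructions from those surrounding definitions are given below; the originals may differ in notation or normalization.

```latex
% Sequence-to-sequence training objective over all prompted samples (x, \hat{y}) in Z.
\mathcal{L} = \sum_{(x,\hat{y}) \in Z} \sum_{m=1}^{|\hat{y}|}
    \log P\bigl(\hat{y}_m \,\big|\, \hat{y}_{<m},\, x\bigr)

% Eq. 3: dataset-level feature as the average per-sample attribute over the test set.
\phi_p(D) = \frac{1}{|D^{te}|} \sum_{d \in D^{te}} \phi_p(d)
```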
QASR: QCRI Aljazeera Speech Resource A Large Scale Annotated Arabic Speech Corpus | We introduce the largest transcribed Arabic speech corpus, QASR 1 , collected from the broadcast domain. This multi-dialect speech dataset contains 2, 000 hours of speech sampled at 16kHz crawled from Aljazeera news channel. The dataset is released with lightly supervised transcriptions, aligned with the audio segments. Unlike previous datasets, QASR contains linguistically motivated segmentation, punctuation, speaker information among others. QASR is suitable for training and evaluating speech recognition systems, acoustics-and/or linguistics-based Arabic dialect identification, punctuation restoration, speaker identification, speaker linking, and potentially other NLP modules for spoken data. In addition to QASR transcription, we release a dataset of 130M words to aid in designing and training a better language model. We show that end-to-end automatic speech recognition trained on QASR reports a competitive word error rate compared to the previous MGB-2 corpus. We report baseline results for downstream natural language processing tasks such as named entity recognition using speech transcript. We also report the first baseline for Arabic punctuation restoration. We make the corpus available for the research community. | Research on Automatic Speech Recognition (ASR) has attracted a lot of attention in recent years Natural Language Processing (NLP), on the other hand values large amount of textual information for designing experiments. NLP research for Arabic has achieved a milestone in the last few years in morphological disambiguation, Named Entity Recognition (NER) and diacritization Our objective is to release the first Arabic speech and NLP corpus to study spoken MSA and DA. This is to enable empirical evaluation of learning more than the word sequence from the speech. In our view, existing speech and NLP corpora are missing the link between the two different modalities. Speech poses unique challenges such as disfluency (Pravin and Palanivelan, 2021), overlap speech In this paper, we create and release 2 the largest corpus for transcribed Arabic speech. It comprises of 2, 000 hours of speech data with lightly supervised transcriptions. Our contributions are: (i) aligning the transcription with the corresponding audio segments including punctuation for building ASR systems; (ii) providing semi-supervised speaker identification and speaker linking per audio segments; (iii) releasing baseline results for acoustic and linguistic Arabic dialect identification and punctuation restoration; (iv) adding a new layer of annotation in the publicly available MGB-2 testset, for evaluating NER for speech transcription; (v) sharing code-switching data between Arabic and foreign languages for speech and text; and finally, (vi) releasing more than 130M words for Language Model (LM). We believe that providing the research community with access to multi-dialectal speech data along with the corresponding NLP features will foster open research in several areas, such as the analysis of speech and NLP processing jointly. Here, we build models and share the baseline results for all of the aforementioned tasks. | The CallHome task within the NIST benchmark evaluations framework The following datasets are released from the Multi-Genre Broadcast MGB challenge: (i) MGB-2 We obtained Aljazeera Arabic news channel's archive (henceforth AJ), spanning over 11 years from 2004 until 2015. 
It contains more than 4, 000 episodes from 19 different programs. These programs cover different domains like politics, society, economy, sports, science, etc. For each episode, we have the following: (i) audio sampled at 16KHz; (ii) manual transcription, the textual transcriptions contained no timing information. The quality of the transcription varied significantly; the most challenging were conversational programs in which overlapping speech and dialectal usage was more frequent; and finally (iii) some metadata. For better evaluation of the QASR corpus, we reused the publicly available MGB-2 Most of the recorded programs have the following metadata: program name, episode title and date, speaker names and topics of the episode. Majority of metadata information appear in the beginning of the file. However, some of them are embedded inside the episode transcription. Figure Speech and text are aligned (see details in Section 2.3) and split into short segments (see Section 2.5). For each segment, we provide: words (element), timing information (starttime and endtime) in addition to speaker ID (who), Average Word Duration (AWD) in seconds, Grapheme Match Error Rate (GWER), and Word Match Error Rate (WMER). For details about word and grapheme match, refer to The main concept of this method is to run an Arabic speech recognition system over the entire episode For alignment, Aljazeera and ASR transcriptions are then converted into two long sequences of words. Aligning the sequences was challenging for many reasons; code-switching between MSA and dialects; human transcription was not verbatim, e.g. some spoken words were dropped due to repetition or correction; spelling and grammar mistakes; usage of foreign languages mainly English and French; and many overlapped speeches. We used Smith-Waterman algorithm Figure Figure After aligning the given transcription with the ASR words for the whole episode, we want to segment the text into shorter segments. Unlike MGB-2, we considered many factors that we believe lead to better and logical segmentation, namely: • Surface: We tried to make segments in the range of [3-10] words. We consider punctuation • Dialog: When a speaker changes in the transcribed text, we consider this as a valid end of segment. By doing this, we assign only one speaker to each segment. • Acoustics: If there is a silence duration of at least 150msec between words, we consider this as a signal to potentially end the current segment. We consider the proceeding linguistic rules to confirm the validity of this end. • Linguistics: For linguistically motivated segmentation, we want to avoid ending segments in wrong places (e.g. in the middle of Named Entities (NE), Noun Phrases (NP) or Adjective Phrases (AP)). To do so, from the 130M words in the LM data, we extracted the most frequent 10K words that were not followed by any punctuation in 90% of the cases, then we revised them manually (in, leads-to, towards). Additionally, we used the publicly available Arabic NLP tools (Farasa) We discuss here the presence of intrasentential code-switching in QASR. We noticed in addition to the intrasentential dialectal code switching (discussed in Section 3.4), the dataset also includes ≈ 6K segments, where alternation between Arabic and English/French languages are seen. 
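Stepping back to the segmentation criteria listed earlier in this excerpt (surface, dialog, acoustics, linguistics), the sketch below shows how such a boundary decision could be combined. The feature inputs (per-token speaker IDs, inter-word silences, a list of words a segment should not end on) and the exact precedence of the rules are assumptions for illustration, not the released pipeline.

```python
def should_end_segment(segment, idx, speaker_ids, silence_after, no_break_words,
                       min_len=3, max_len=10, min_silence=0.150):
    """Decide whether the growing segment may end after transcript position idx.

    segment:        tokens accumulated so far for the current segment
    speaker_ids:    speaker ID for every token of the transcript
    silence_after:  silence (seconds) following every token
    no_break_words: words a segment should not end on (e.g. prepositions, or tokens
                    inside named entities / noun phrases / adjective phrases)
    """
    # Dialog: a speaker change always closes the segment.
    if idx + 1 < len(speaker_ids) and speaker_ids[idx + 1] != speaker_ids[idx]:
        return True
    # Surface: keep segments within the [min_len, max_len] word range.
    if len(segment) < min_len:
        return False
    if len(segment) >= max_len:
        return True
    # Linguistics: never break right after a word that should not end a segment.
    if segment[-1] in no_break_words:
        return False
    # Acoustics: a sufficiently long pause is treated as a boundary signal.
    return silence_after[idx] >= min_silence
```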
To quantify the amount of code-switching present in this data, we calculate both the utterance and corpus level Code-Mixing Index (CMI), motivated by Furthermore, from utterance-level analysis, we notice that the majority of the code-switched segments falls under 15 < CM I ≤ 30% with an average of 2 alteration points per segment (e.g. Ar → En → Ar). Even though the code-switching occurs in only 0.4% of the full dataset, we notice that we have very short ≈ 968 segments (ranging CMI value > 30%) with frequent alternating language code, such as: " duplex Building". In the future, these segments could be used to further explore the effect of such code-switching in the performance of speech and NLP models jointly. 3 Downstream Tasks In this section, we study QASR dataset for the ASR task. We adopt the End-to-End Transformer (E2E-T) architecture from It can be seen that the best E2E-T-MGB-2 achieves slightly better WER with a difference of 0.3% on average. This is expected since adopted E2E-T architecture was carefully tuned on MGB-2 dataset. However, the E2E-T-QASR achieves lower substitution and insertion rates with an absolute difference of 2.7% and 0.5% on average respectively. It can also be noticed that almost half of the E2E-T-QASR errors are due to deletions. To investigate these results further, we visualize the distribution of segmentation duration of the MGB-2 train, the QASR train and the testsets as shown in Figure We adapt a simple transformer-biLSTM architecture During the training, special tokens identifying start-and end-of the sentence are added to the in-put subword sequence. Despite the fact that Arabic has a skewed distribution in punctuation, the baseline results reported in Table One of the biggest challenges in broadcast domain is its speech diversity. The anchor speaker voice is often clear and planned. However, the spoken style 11 of different program guests can present various challenges. Here, we showcase how QASR could be used to evaluate existing speaker models based on the speakers' role in each episode. In the future, the dataset can also be used to study turntaking and speaker dynamics, given the interaction between speakers in QASR. Sets EER Total Pairs Anchor 9.2 40K (75% male) Guest 7.5 40K (100% male) Mixed 7.9 40K (75% male) VoxCeleb1-tst 6.8 38K We adapt one of the widely-known architectures used to model an end-to-end text-independent Speaker Recognition (SR) system. For the study, we use a pre-trained model, with four temporal convolution neural networks followed by a global (statistical) pooling layer and then two fully connected layers. The input to the model is MFCCs features (with 40 coefficient) computed with a 25msec window and 10ms frame-rate from the 16KHz audio. The model is trained on Voxceleb1 For speaker verification, we use verified same/different-speaker pairs of speech segments as input. We extract the length normalized embed- 11 The style can vary based on language fluency, speech rate, use of different dialects among other factors. dings from the last layer of the SR model and then computed the cosine similarity between pairs. For our evaluation, we constructed these verification pair trials by randomly picking up 40K utterance pairs from: (i) speakers of the same gender; (ii) similar utterance lengths; and (iii) a balanced distribution between positive and negative targets From the results, we observe that the SR model effectively distinguishes between the positive and negative pairs with ≈ 70% (A) -72% (G) accuracy. 
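The verification protocol above, cosine similarity between length-normalized embeddings reported as EER, can be sketched as follows; the embedding extractor itself is not reproduced, the EER is approximated by a simple threshold sweep, and the toy data at the end is purely illustrative.

```python
import numpy as np

def cosine_score(emb_a, emb_b):
    """Cosine similarity between length-normalized speaker embeddings."""
    a = emb_a / np.linalg.norm(emb_a)
    b = emb_b / np.linalg.norm(emb_b)
    return float(np.dot(a, b))

def equal_error_rate(scores, labels):
    """Approximate EER: threshold where false acceptances and rejections balance."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, int)
    best_gap, eer = float("inf"), 1.0
    for thr in np.unique(scores):
        far = np.mean(scores[labels == 0] >= thr)   # false acceptance rate
        frr = np.mean(scores[labels == 1] < thr)    # false rejection rate
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2.0
    return eer

# Toy usage with random embeddings (purely illustrative).
rng = np.random.default_rng(0)
pairs = [(rng.normal(size=128), rng.normal(size=128)) for _ in range(10)]
labels = [1, 0] * 5
scores = [cosine_score(a, b) for a, b in pairs]
print(equal_error_rate(scores, labels))
```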
Comparing the EER, we notice that it is harder to differentiate between anchors than guests. This can be due to the fact that anchors are using the same acoustic conditions, and the current models are learning recording conditions To understand the dialectal nature of QASR dataset, we analyze the acoustic and lexical representations for 100 segments from each speaker To obtain the dialect labels, we run the pretrained dialect identification models for both speech and text modality. We address the dialect identification as multi-stage classification: Firstly, we predict the labels of the segments -MSA vs DA -and, secondly, if the label is DA, we further propagate the labels to detect the country of the selected speaker (i.e fine-grained dialect classification). For country level evaluation, we manually annotate each speaker's country label (see Table For lexical modality, we use the pre-trained QADI We observe that in both the modalities, 50% of the anchors speak MSA in 70% of the time in speech and 90% of the time in text. As for the other 50%, we notice that using the dialect identification modules, we can detect only 20% of the speaker's nationality correctly. The aforementioned observations are pre-anticipated, as anchors are professionally trained to speak mostly in MSA, making it harder for the model to predict the correct country label. This also explains why the large portion of the data is MSA. As for guest speakers, we notice that the lexical classifier detected that 30% of the speakers use MSA, while 70% of the speakers were detected as DA. As for the acoustic models, we notice that all speakers use dialects more than 70% of the time. Comparing the accuracy of identifying the correct dialects based on annotated country labels, we notice that both the text and acoustic models perform comparatively better in identify the guest speakers' country -64% from text and 65% from acoustic. Our hypothesis for such increase in performance is that guest speakers, unlike the anchors, mostly speak using their dialects, making it easier for the model to infer their country. When comparing the decision from both modalities, we notice that there is an agreement of 67.5% (65% for anchor and 70% for guest speakers) for MSA/DA classification. Most of the classification errors in speech and text dialect identification models are due to confusion between dialects spoken in neighboring countries; e.g. Syria and Lebanon in the Levantine region; Tunisia and Algeria in the North African region. NER is essential for a variety of NLP applications such as information extraction and summarization. There are many researches on Arabic NER for news articles, e.g. ANERcorp In this paper, we introduce a 2, 000 hours transcribed Arabic speech corpus, QASR. We report results for automatic speech recognition, Arabic dialect identification, speaker verification, and punctuation restoration to showcase the importance and usability of the dataset. QASR is also the first Arabic speech-NLP corpus to study spoken modern standard Arabic and dialectal Arabic. We report for the first time named entity recognition in Arabic news transcription. The 11, 092 unique speakers present in QASR can be used to study turn-taking and speaker dynamics in the broadcast domain. The corpus can also be useful for unsupervised methods to select speaker for text to speech | 1,238 | 1,906 | 1,238 |
Learning to Recover from Multi-Modality Errors for Non-Autoregressive Neural Machine Translation | Non-autoregressive neural machine translation (NAT) predicts the entire target sequence simultaneously and significantly accelerates inference process. However, NAT discards the dependency information in a sentence, and thus inevitably suffers from the multi-modality problem: the target tokens may be provided by different possible translations, often causing token repetitions or missing. To alleviate this problem, we propose a novel semiautoregressive model RecoverSAT in this work, which generates a translation as a sequence of segments. The segments are generated simultaneously while each segment is predicted token-by-token. By dynamically determining segment length and deleting repetitive segments, RecoverSAT is capable of recovering from repetitive and missing token errors. Experimental results on three widelyused benchmark datasets show that our proposed model achieves more than 4× speedup while maintaining comparable performance compared with the corresponding autoregressive model. * indicates equal contribution † indicates corresponding author Src. es gibt heute viele Farmer mit diesem Ansatz Feasible there are lots of farmers doing this today Trans. there are a lot of farmers doing this today Trans. 1 there are lots of of farmers doing this today Trans. 2 there are a lot farmers doing this today | Although neural machine translation (NMT) has achieved state-of-the-art performance in recent years Recently, non-autoregressive neural machine translation (NAT) models Intensive efforts have been devoted to alleviate the above problem, which can be roughly divided into two lines. The first line of work leverages the iterative decoding framework to break the independence assumption, which first generates an initial translation and then refines the translation The segments are generated simultaneously while each segment is generated token-by-token conditioned on both the source tokens and the translation history of all segments (e.g., the token "are" in the first segment is predicted based on all the tokens colored green). Repetitive segments (e.g., the third segment "lots of") are detected and deleted automatically. iteratively by taking both the source sentence and the translation of last iteration as input To alleviate the multi-modality problem while maintaining a reasonable decoding speedup, we propose a novel semi-autoregressive model named RecoverSAT in this work. RecoverSAT features in three aspects: (1) To improve decoding speed, we assume that a translation can be divided into several segments which can be generated simultaneously. (2) To better capture target-side dependency, the tokens inside a segment is autoregressively generated conditioned not only on the previously generated tokens in this segment but also on those in other segments. On one hand, we observe that repetitive tokens are more likely to occur within a short context. Therefore, autoregressively generating a segment is beneficial for reducing repetitive tokens. On the other hand, by conditioning on previously generated tokens in other segments, the model is capable of guessing what feasible translation candidates have been chosen by each segment and adapts accordingly, e.g., recovering from missing token errors. As a result, our model captures more targetside dependency such that the multi-modality problem can be alleviated naturally. 
(3) To make the model capable of recovering from repetitive token errors, we introduce a segment deletion mechanism into our model. Informally speaking, our model will mark a segment to be deleted once it finds the content has been translated in other segments. We conduct experiments on three benchmark datasets for machine translation to evaluate the proposed method. The experimental results show that RecoverSAT is able to decode over 4× faster than the autoregressive counterpart while maintaining comparable performance. The source code of this work is released on | Autoregressive neural machine translation (AT) generates the translation token-by-token conditioned on translation history. Denoting a source sentence as x = {x i } T i=1 and a target sentence as y = {y j } T j=1 , AT models the joint probability as: where y <t denotes the generated tokens before y t . During decoding, the translation history dependency makes the AT model predict each token after all previous tokens have been generated, which makes the decoding process time-consuming. Non-autoregressive neural machine translation (NAT) The conditional independence enables the NAT models to generate all target tokens in parallel. However, independently predicting all target tokens is challenging as natural language often exhibits strong correlation across context. Since the model knows little information about surrounding target tokens, it may consider different possible translations when predicting different target tokens. The problem is known as the multi-modality problem RecoverSAT extends the original Transformer Formally, assuming a translation y is generated as K segments S where S i t denotes the t-th token in the i-th segment, } denotes the translation history in the i-th segment, and L is segment length. Here, two natural problems arise for the decoding process: • How to determine the length of a segment? • How to decide a segment should be deleted? We address the two problems in a uniform way in this work. Suppose the original token vocabulary is V , we extend it with two extra tokens EOS and DEL. Then for the segment S i , the most probable token Ŝi t at time step t: has three possibilities: (1) Ŝi t ∈ V : the segment S i is incomplete and the decoding process for it should continue; (2) Ŝi t = EOS: the segment S i is complete and the decoding process for it should terminate; (3) Ŝi t = DEL: the segment S i is repetitive and should be deleted. Accordingly, the decoding process for it should terminate. The entire decoding process terminates when all the segments meet EOS/DEL or reach the maximum token number. It should be noticed that we do not explicitly delete a segment when DEL is encountered but do it via post-processing. In other words, the model is trained to ignore the segment to be deleted implicitly. As there is little target-side information available in the early stage of the decoding process, the errors caused by the multi-modality problem is inevitable. In this work, instead of reducing such errors directly, we propose two training mechanisms to teach our RecoverSAT model to recover dynamically according to the sentence length. In other words, we can predict the target sentence length to determine the segment number during inference. In this case, our model can also decode in constant time. from errors: (1) Dynamic Termination Mechanism: learning to determine segment length according to target-side context; (2) Segment Deletion Mechanism: learning to delete repetitive segments. 
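To make the segment-wise decoding procedure above concrete, the following is a minimal sketch of the generation loop; `predict_next_tokens` is a hypothetical stand-in for the decoder, returning one proposed token per segment conditioned on the source sentence and the translation history of all segments.

```python
EOS, DEL = "<eos>", "<del>"

def decode_segments(predict_next_tokens, source, num_segments, max_len=50):
    """Generate `num_segments` segments simultaneously, token by token.

    A segment stops growing when it emits EOS (complete) or DEL (repetitive,
    dropped in post-processing), or when `max_len` is reached."""
    segments = [[] for _ in range(num_segments)]
    finished = [False] * num_segments

    for _ in range(max_len):
        if all(finished):
            break
        # One decoder step proposes the next token of every segment,
        # conditioned on the source and the history of all segments.
        proposals = predict_next_tokens(source, segments)
        for i, token in enumerate(proposals):
            if finished[i]:
                continue
            segments[i].append(token)
            if token in (EOS, DEL):
                finished[i] = True

    # Post-processing: drop segments marked DEL, strip control tokens, concatenate.
    output = []
    for seg in segments:
        if DEL in seg:
            continue
        output.extend(t for t in seg if t != EOS)
    return output
```

In the actual model the per-step predictions for all segments would come from a single decoder call; the loop above only illustrates the termination and deletion logic.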
As shown in Section 3.1, instead of pre-specifying the lengths of segments, we let the model determine the lengths by emitting the EOS token. This strategy helps our model recover from multi-modality related errors in two ways: 1. The choice of the first few tokens is more flexible. Taking Figure 2. As shown in Eq. 3, a token is generated conditioned on all the previously generated tokens in all the segments. Therefore, the decoder has richer target-side information to detect and recover from such errors. However, it is non-trivial to train the model to learn such behaviour while maintaining a reasonable speedup. On one hand, as the decoding time of our RecoverSAT model is proportional to the maximum length of the segments, we should divide the target sentences of training instances into equal-length segments to encourage the model to generate segments with identical length. On the other hand, the model should be exposed to the multi-modality related errors to enhance its ability of recovering from such errors, which suggests that the target sentences of training instances should be divided randomly to simulate these errors. To alleviate the problem, we propose a mixed annealing dividing strategy. To be specific, we randomly decide whether to divide a target sentence equally or randomly at each training step and gradually anneal to the equally-dividing method at the end of training. Formally, given the target sentence y and the segment number K, we define the segment dividing indice set r as follows: where Bernoulli(p) is the Bernoulli distribution with parameter p, EQUAL(n, m) A larger value of p leads to better error recovering ability while a smaller one encourages the model to generate segments with similar lengths (in other words, better speedup). To balance the two aspects, we gradually anneal p from 1 to 0 in the training process, which achieves better performance (Section 4.5). Although the dynamic termination mechanism makes the model capable of recovering from missing token errors and reducing repetitive tokens, the model still can not recover from errors where token repetition errors have already occurred. We find the major errors of our model occur when generating the first token of each segment since it cannot see any history and future. In this situation, two repetitive segments will be generated. To alleviate this problem, we propose a segment-wise deletion strategy, which uses a special token DEL to indicate a segment is repetitive and should be deleted A straightforward way to train the model to learn to delete a segment is to inject pseudo repetitive segments into the training data. The following is an example: Given the target sentence "there are lots of farmers doing this today", we first divide it into 3 segments "there are", "lots of farmers" and "doing this today". Then we copy the first two tokens of the second segment and append the special token DEL to the end to construct a pseudo repetitive segment "lots of DEL". Finally, we insert the repetitive segment to the right of the chosen segment, resulting in 4 segments. Formally, given the expected segment number K and the target sentence y, we first divide and then build a pseudo repetitive segment S i rep by copying the first m tokens of a randomly chosen segment S i and appending DEL to the end, m is uniformly rep is inserted at the right side of S i . The final K segments are . 
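A minimal sketch of how the target side of one training instance could be prepared under these two strategies is given below; the dividing and copying logic follows the description above, while the function names and edge-case handling are our own simplifications.

```python
import math
import random

DEL = "<del>"

def divide_target(tokens, k, p):
    """Divide a target sentence into k segments: roughly equal-length segments
    with probability p, otherwise segments with random boundaries (to simulate
    multi-modality errors). p is annealed from 1 to 0 during training."""
    n = len(tokens)
    if random.random() < p:
        size = math.ceil(n / k)
        bounds = [min(i * size, n) for i in range(1, k)]
    else:
        bounds = sorted(random.sample(range(1, n), k - 1))
    cuts = [0] + bounds + [n]
    return [tokens[a:b] for a, b in zip(cuts, cuts[1:])]

def inject_pseudo_repetition(segments):
    """Copy the first m tokens of a randomly chosen segment, append DEL, and
    insert the pseudo segment immediately to its right. To end up with K
    segments overall, the sentence would first be divided into K-1 segments."""
    i = random.randrange(len(segments))
    m = random.randint(1, len(segments[i]))
    pseudo = segments[i][:m] + [DEL]
    return segments[:i + 1] + [pseudo] + segments[i + 1:]

if __name__ == "__main__":
    random.seed(0)
    sent = "there are lots of farmers doing this today".split()
    segments = divide_target(sent, k=2, p=0.5)    # K-1 = 2 segments first
    print(inject_pseudo_repetition(segments))      # 3 segments, one ending in DEL
```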
However, injecting such pseudo repetitive segments to all training instances will mislead the model that generating then deleting a repetitive segment is a must-to-have behaviour, which is not desired. Therefore, we inject pseudo repetitive segment into a training instance with probability q in this work. We conduct experiments on three widely-used machine translation datasets: IWSLT16 En-De (196k pairs), WMT14 En-De (4.5M pairs) and WMT16 En-Ro (610k pairs). For fair comparison, we use the preprocessed datasets in For model hyperparameters, we follow most of the settings in We use the Transformer The performance of our RecoverSAT model and the baselines is shown in Table (1) Our RecoverSAT model achieves comparable performance with the AT baseline (Transformer) while keeping significant speedup. When K = 2, the BLEU score gap is moderate (from 0.06 to 0.4, even better than Transformer on the WMT16 En→Ro and Ro→En tasks) and the speedup is about 2×. When K = 10, the BLEU scores drop less than 5% relatively, and the speedup is considerably good (over 4×). (2) Our RecoverSAT model outperforms all the strong NAT baselines except CMLM (on the WMT16 En→Ro and Ro→En tasks). However, the performance gap is negligible (0.16 and 0.12 respectively), and CMLM is a multi-step NAT method which is significantly slower than our model. (3) As K grows, the BLEU scores drop moderately and the speedup grows significantly, indicating that our RecoverSAT model has a good generalizability. For example, the BLEU scores drop less than 0.45 when K grows from 2 to 5, and drop no more than 0.90 except on the WMT14 De→En task when K further grows to 10. Meanwhile, the speedup for K = 10 is larger than 4×, which is considerably good. (4) There are only 7 baselines (SynST, imitate-NAT+LPD, LV NAR, NART+LPD, FCL-NAT+NPD, ReorderNAT and NART-DCRF+LPD) achieving better speedup than our RecoverSAT model when K = 10. However, only Reorder-NAT and NART-DCRF+LPD achieve comparable BLEU scores with our model.The improvements of both ReorderNAT and NART-DCRF are complementary to our method. It is an interesting future work to join these works together. As discussed in Section 3.2.1, the dynamic termination mechanism is used to train our RecoverSAT model to learn to determine segment length dynamically conditioned on target-side context such that it is recoverable from multi-modality related errors. In this section, we investigate the effect of this mechanism and the results are shown in Table As multi-modality related errors generally manifest as repetitive or missing tokens in the translation, we propose two quantitative metrics "Rep" and "Mis" to measure these two phenomenons respectively. "Rep" is defined as the relative increment of repetitive token ratio w.r.t. to a reference AT model. And "Mis" is defined as the relative increment of missing token ratio given the references w.r.t. to a reference AT model. Formally, given the translations Ŷ = {ŷ 1 • • • ŷk • • • } produced by the model to be evaluated and the trans- where 1(cond) = 1 if the condition cond holds otherwise 0, and y k j is the j-th token of the translation sentence y k . Given Ŷ, Ŷauto and references Ȳ = The results are evaluated on the IWSLT16 En-De validation set. p is the parameter of Bernoulli distribution in Eq. 5. "Rep" and "Mis" measure the relative increment (%) of repetitive and missing token ratios (see Section 4.5), the smaller the better. " Step" denotes the average number of decoding steps. And "1→0" denotes annealing p from 1 to 0 linearly. 
where m(•, •) computes the missing token ratio and is defined as follows: where c(y, w) is the occurrence number of a token w in the sentence y. From Table In this section, we investigate the effect of the segment deletion mechanism and the results are shown in Table (1) Without using the segment deletion mechanism (q = 0), the BLEU score drops significantly and the repetitive token errors ("Rep") increase drastically, indicating that the mechanism is effective for recovering from repetitive token errors. (2) As q grows larger, the average number of decoding steps ("Step") increases steadily because the model is misled that to generate then delete a repetitive segment is expected. Thus, q should not be too large. (3) The repetitive token errors ("Rep") increase drastically when q > 0.7. We believe that the reason is that the pseudo repetitive segments are constructed randomly, making it hard to learn the underlying mapping. (4) The model achieves the best performance with q = 0.5. Therefore, we set q = 0.5 in our experiments. Figure die er greif endste Abteilung ist das Denk mal für die Kinder , das zum Ged enken an die 1,5 Millionen Kinder , die in den Konzent rations lagern und Gas k ammern vernichtet wurden , erbaut wurde . Reference the most tragic section is the children's mem orial , built in memory of 1.5 million children killed in concentration camps and gas cham bers . Translation the most tangible department department the monument monument the children , which was built commem commem orate 1.5 1.5 million children were destroyed in the concentration camps and gas cham bers . RecoverSAT (K = 10) There has been various work investigating to accelerate the decoding process of sequence generation models In this work, we propose a novel semiautoregressive model RecoverSAT to alleviate the multi-modality problem, which performs translation by generating segments non-autoregressively and predicts the tokens in a segment autoregressively. By determining segment length dynamically, RecoverSAT is capable of recovering from missing token errors and reducing repetitive token errors. By explicitly detecting and deleting repetitive segments, RecoverSAT is able to recover from repetitive token errors. Experiments on three widely-used benchmark datasets show that our RecoverSAT model maintains comparable performance with more than 4× decoding speedup compared with the AT model. Our RecoverSAT model utilizes the positional encoding method in where E token w is the token embedding vector of w. However, we can not apply this method to target tokens directly. Since lengths of segments are dynamically determined, the positions of the tokens in the target sentence, except those in the first segment, are not available during generation. To solve the problem, we use the aforementioned method to independently encode the position in the corresponding segment of each token instead and adopt an absolute segment embedding method, which uses a distinct trainable vector to represent the position of each segment. Formally, the input vector of the decoder for the n-th target token v of the j-th segment is computed as: where E seg j is the segment embedding vector for the segment position j. | 1,323 | 2,616 | 1,323 |
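As a rough illustration of this embedding scheme (module and argument names are assumptions, not the authors' code), each target token embedding can be combined with a sinusoidal encoding of its position within its segment and a learned absolute segment embedding:

```python
import math
import torch
import torch.nn as nn

class SegmentAwareEmbedding(nn.Module):
    """Token embedding + sinusoidal position-within-segment encoding
    + learned absolute segment embedding."""

    def __init__(self, vocab_size, d_model, max_segments, max_seg_len=512):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)
        self.seg = nn.Embedding(max_segments, d_model)
        pe = torch.zeros(max_seg_len, d_model)
        pos = torch.arange(max_seg_len, dtype=torch.float).unsqueeze(1)
        div = torch.exp(torch.arange(0, d_model, 2, dtype=torch.float)
                        * (-math.log(10000.0) / d_model))
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        self.register_buffer("pe", pe)

    def forward(self, token_ids, in_segment_pos, segment_ids):
        """All three inputs have shape (batch, length)."""
        return self.tok(token_ids) + self.pe[in_segment_pos] + self.seg(segment_ids)

if __name__ == "__main__":
    emb = SegmentAwareEmbedding(vocab_size=100, d_model=16, max_segments=10)
    tokens = torch.randint(0, 100, (2, 6))
    positions = torch.tensor([[0, 1, 2, 0, 1, 2]] * 2)  # position inside each segment
    segments = torch.tensor([[0, 0, 0, 1, 1, 1]] * 2)   # segment index of each token
    print(emb(tokens, positions, segments).shape)       # torch.Size([2, 6, 16])
```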
Factored Statistical Machine Translation for Grammatical Error Correction | This paper describes our ongoing work on grammatical error correction (GEC). Focusing on all possible error types in a real-life environment, we propose a factored statistical machine translation (SMT) model for this task. We consider error correction as a series of language translation problems guided by various linguistic information, as factors that influence translation results. Factors included in our study are morphological information, i.e. word stem, prefix, suffix, and Part-of-Speech (PoS) information. In addition, we also experimented with different combinations of translation models (TM), phrase-based and factor-based, trained on various datasets to boost the overall performance. Empirical results show that the proposed model yields an improvement of 32.54% over a baseline phrase-based SMT model. The system participated in the CoNLL 2014 shared task and achieved the 7 th and 5 th F 0.5 scores 1 on the official test set among the thirteen participating teams. | The task of grammatical error detection and correction (GEC) is to make use of computational methods to fix the mistakes in a written text. It is useful in two aspects. For a non-native English learner it may help to improve the grammatical quality of the written text. For a native speaker the tool may help to remedy mistakes automatically. Automatic 1 These two rankings are based on gold-standard edits without and with alternative answers, respectively. correction of grammatical errors is an active research topic, aiming at improving the writing process with the help of artificial intelligent techniques. Second language learning is a user group of particular interest. Recently, Helping Our Own (HOO) and CoNLL held a number of shared tasks on this topic In this paper, we propose a factored SMT model by taking into account not only the surface information contained in the sentence, but also morphological and syntactic clues (i.e., word stem, prefix, suffix and finer PoS information). To counter the sparsity problem we do not use artificial or manual approaches to enrich the training data. Instead we apply factored and transductive learning techniques to enhance the model on a small dataset. In addition, we also experimented with different combinations of translation models (TM), phrase-and factorbased, that are trained on different datasets to boost the overall performance. Empirical results show that the proposed model yields an improvement of 32.54% over a baseline phrasebased SMT model. The remainder of this paper is organized as follows: Section 2 describes our proposed methods. Section 3 reports on the design of our experiments. We discuss the result, including the official shared task results, in Section 4,. We summarize our conclusions in Section 5. | In contrast with phrase-based translation models, factored models make use of additional linguistic clues to guide the system such that it generates translated sentences in which morphological and syntactic constraints are met In order to construct a SMT model, we convert the training data into a parallel corpus where the problematic sentences that ought to be corrected are regarded as source sentences, while the reference sentences are treated as the corresponding target translations. We discovered that a number of sentences is absent at the target side due to incorrect annotations in the golden data. We removed these unparalleled sentences from the data. 
Secondly, the initial capitalizations of sentences are converted to their most probable casing using the Moses truecaser 2 . URLs are quite common in the corpus, but they are not useful for learning and even may cause the model to apply unnecessary correction on it. Thus, we mark all of the ULRs with XML markups, signaling the SMT decoder not to analyze an URL and output it as is. In this study we explore four different factors: prefix, suffix, stem, and PoS. This linguistic information not only helps to capture the local constraints of word morphologies and the interaction of adjacent words, but also helps to prevent data sparsity caused by inflected word variants and insufficient training data. Word stem: Instead of lemmas, we prefer word stemming as one of the factors, considering that stemming does not requires deep morphological analysis and is easier to obtain. Second, during the whole error detection and correction process, stemming information is used as auxiliary information in addition to the original word form. Third, for grammatical error correction using word lemmas or word stems in factored translation model shows no significant difference. This is because we are translating text of the same language, and the translation of this factor, stem or lemma, is straightforwardly captured by the model. Hence, we do not rely on the word lemma. In this work, we use the English Porter stemmer Prefix: The second type of morphological information we explored is the word prefix. Although a prefix does not present strong evidence to be useful to the grammatical error correction, we include it in our study in order to fully investigate all types of morphological information. We believe the prefix can be an important factor in the correction of initial capitalization, e.g. "In this era, engineering designs…" should be changed to "In this era, engineering designs…" In model construction, we take the first three letters of a word as its prefix. If the length of a word is less than three, we use the word as the prefix factor. Suffix: Suffix, one of the important factors, helps to capture the grammatical agreements between predicates and arguments within a 2 After decoding, we will de-truecase all these words. sentence. Particularly the endings of plural nouns and inflected verb variants are useful for the detection of agreement violations that shown up in word morphologies. Similar to how we represent the prefix, we are interested in the last three characters of a word. According to the description of factors, Figure constantli combin idea will result in better solut be formul Figure PoS: Part-of-Speech tags denote the morphosyntactic category of a word. The use of PoS sequences enables us to some extent to recover missing determiners, articles, prepositions, as well as the modal verb in a sentence. Empirical studies We intend to further boost the overall performance of the correction system by combining the strengths of individual models through model combination, and compare against the baseline. The systems compared here cover three pipelined models and a multi-factored model, as described earlier in Section 3. 
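A minimal sketch of how the four factors described above could be attached to each token is shown below, using NLTK's Porter stemmer and its default PoS tagger as stand-ins for the tools actually used; the pipe-separated output mirrors the usual factored-corpus format.

```python
import nltk
from nltk.stem.porter import PorterStemmer

# Assumed to be available locally; otherwise uncomment the downloads.
# nltk.download("averaged_perceptron_tagger")

stemmer = PorterStemmer()

def factorize(tokens):
    """Return 'surface|stem|prefix|suffix|pos' factors for each token."""
    tagged = nltk.pos_tag(tokens)
    factored = []
    for word, pos in tagged:
        stem = stemmer.stem(word)
        prefix = word[:3] if len(word) >= 3 else word
        suffix = word[-3:] if len(word) >= 3 else word
        factored.append("|".join([word, stem, prefix, suffix, pos]))
    return " ".join(factored)

if __name__ == "__main__":
    sentence = "constantly combining ideas will result in better solutions".split()
    print(factorize(sentence))
```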
The combined systems include: 1) CSys suf+phrase : the combination of Sys +suf and the baseline phrasebased translation model; 2) CSys suf+suf : we combine two similar factored models with suffix factors, Sys +suf , which is trained on the same corpus; and 3) TSys suf+phrase : similar to CSys suf+phrase , but the training data for the second phrase-based model is augmented by adding the output sentences from the previous model (paired with the correct sentences). Our intention is to enlarge the size of the training data. The evaluation results are presented in Table We pre-process the NUCLE corpus In total, one baseline system, five individual systems, and four combination systems are evaluated in this study. The baseline system (Baseline) is trained on the words-only corpus using a phrase-based translation model. For the individual systems we adopt the factored translation model that are trained respectively on 1) surface and stem factors (Sys +stem ), 2) surface and suffix factors (Sys +suf ), 3) surface and prefix factors (Sys +pref ), 4) surface and PoS factors (Sys +PoS ), and 5) surface and modified-PoS factors (Sys +MPoS ). The combination systems include: 1) the combination of "factored + phrase-based" and "factored + factored" for models cascading; and 2) the factors of surface, stem and modified-PoS (Sys +stem+MPoS ) are combined for constructing a correction system based on a multi-factor model. We report our results in terms of the precision, recall and F 0.5 obtained by each of the individual models and combined models. Table After fully evaluating the designed individual models as well as the integrated ones, we adopt Sys +MPoS as our designated system for this grammatical error correction task. The official test set consists of 50 essays, and 2,203 errors. Table Table This paper describes our proposed grammatical error detection and correction system based on a factored statistical machine translation approach. We have investigated the effectiveness of models trained with different linguistic information sources, namely morphological clues and syntactic PoS information. In addition, we also explore some ways to combine different models in the system to tackle the correction problem. The constructed models are compared against the baseline model, a phrase-based translation model. Results show that PoS information is a very effective factor, and the model trained with this factor outperforms the others. One difficulty of this year's shared task is that participants have to tackle all 28 types of errors, which is five times more than last year. From the results, it is obvious there are still many rooms for improving the current system. | 983 | 1,785 | 983 |
PAIRSPANBERT: An Enhanced Language Model for Bridging Resolution | We present PAIRSPANBERT, a SPANBERTbased pre-trained model specialized for bridging resolution. PAIRSPANBERT is pre-trained with a novel objective that aims to learn the contexts in which two mentions are implicitly linked to each other from a large amount of data automatically generated either heuristically or via distance supervision with a knowledge graph. Despite the noise inherent in the automatically generated data, we achieve the best results reported to date on three evaluation datasets for bridging resolution when replacing SPANBERT with PAIRSPANBERT in a stateof-the-art resolver that jointly performs entity coreference resolution and bridging resolution. | Bridging is essential for establishing coherence among the entities within a text through nonidentical semantic or encyclopedic relations (1) In June, farmers held onto meat, milk and grain, waiting for July's usual state directed price rises. The Communists froze prices instead. The task of bridging resolution, which involves identifying all the bridging anaphors in a text and linking them to their antecedents, is crucial for machine comprehension of the relations between discourse entities for various downstream applications, such as question answering The most successful natural language learning paradigm to date is arguably the "pre-train and finetune" paradigm, where a model is first pre-trained on very large amounts of data in a task-agnostic, self-supervised manner and then fine-tuned using a potentially small amount of task-specific training data in the usual supervised manner. This paradigm is ideally applicable to bridging resolution, where the amount of annotated training data is relatively small, especially in comparison to the related task of entity coreference resolution. In fact, by using SPANBERT A natural question is: how can we build upon the successes of this pre-train and fine-tune framework for bridging resolution? Apart from achieving stateof-the-art results, Motivated by this observation, we design a novel pre-training objective for bridging resolution that allows a model to learn the contexts in which two mentions are implicitly linked to each other. We subsequently use our objective to further pre-train SPANBERT in combination with its original objectives, yielding PAIRSPANBERT, a pre-trained model that is specialized for bridging resolution. Note that an important factor that contributes to the success of pre-training is the sheer amount of data on which the model is pre-trained: since pretraining tasks are designed to be self-supervised learning tasks, a very large amount of annotated training data can be automatically generated, thus allowing the model to potentially acquire a lot of linguistic and commonsense knowledge. To enable our model to learn the contexts that are indicative of bridging, we employ a large amount of data that can be automatically generated either heuristically While the vast majority of existing bridging resolvers are evaluated in the rather unrealistic setting where gold mentions are assumed as input, we follow | Bridging resolution. The two sub-tasks of bridging resolution, namely bridging anaphora recognition and bridging anaphora resolution, have been tackled separately. 
One line of research has modeled bridging anaphora recognition as a part of the information status (IS) classification problem where each discourse entity is assigned an IS category, with bridging being one of the categories Recent studies have begun tackling bridging resolution and its sub-tasks in the end-to-end setting. For example, Hou (2021) uses a combination of neural mention extraction and IS classification models for bridging anaphora recognition. Furthermore, Enhanced pre-trained language models. BERT State-of-the-art results on ISNotes and BASHI are reported in The Bakersfield Supermarket went … The business closed when its old … … The murder saddened the customers … Coref. Coref. Brid. BERT for bridging resolution, and eventually replace SPANBERT with PAIRSPANBERT in the MTL framework, in this section we present Y&P's MTL framework (Section 3.1), Kobayashi et al.'s extensions to the framework (Section 3.2), and the inner workings of SPANBERT (Section 3.3). Y&P's model takes as input a document D represented as a sequence of word tokens and the associated set of mentions (which can be gold mentions or automatically extracted mentions), and performs joint bridging resolution and coreference resolution, which we define below, in a MTL framework. The bridging resolution task involves assigning span i an antecedent y b ∈ {1, ..., i -1, ϵ}, where the value of y b is the id of span i's antecedent, which can be a dummy antecedent ϵ (i.e., i is not anaphoric) or one of the preceding spans. Y&P define the following scoring function: where s a (i, j) is a pairwise bridging score that indicates how likely span i refers to a preceding span j. The model predicts the antecedent of i to be y * b = arg max j∈Y b (i) s b (i, j), where Y b (i) is the set of candidate antecedents of i. The entity coreference resolution task involves identifying the entity mentions that refer to the same real-world entity. Specifically, the goal is to find an antecedent for each span using a scoring function that can be defined in a similar way as the s b function in the bridging resolution task. Figure Span Representation Layer To encode the tokens and the surrounding contexts of a gold mention, Y&P use a bidirectional LSTM (Hochreiter and Schmidhuber, 1997) that takes as input the BERT and GloVe embeddings. They define g i , the representation of span i, as [x start(i) ; x end(i) ; x head(i) ; ϕ i ], where x start(i) and x end(i) are the hidden vectors of the start and end tokens of i, x head(i) is an attention-based head vector and ϕ i is a span width feature embedding. Bridging Prediction Layer To predict bridging links, Y&P first calculate the pairwise score between spans i and j as follows: where FFNN b (•) represents a standard feedforward neural network, and • denotes element-wise multiplication. This pairwise score includes g i • g j , which encodes the similarity of i and j, and ψ ij , which denotes the distance between them. Coreference Prediction Layer To predict coreference links, Y&P calculate a pairwise score between two spans that is defined analogously as in Equation 2 using another FFNN, FFNN c . The model shares the first few hidden layers of FFNN b and FFNN c as well as the span representations. where α is a positive constant that controls the impact of the rule information on s ′ b . The model then uses s ′ b (i, j) to rank the candidate antecedents of span i. 
Note that (1) if no rule posits i and j as bridging, r(i, j) is 0; (2) rule precision is computed on the training set; and (3) α is tuned on the development set. The loss function is the weighted sum of the losses of the bridging task (L b ) and the coreference task (L c ). L b and L c are defined as the negative marginal log-likelihood of all correct bridging antecedents and coreference antecedents, respectively. The weights associated with the losses are tuned using grid search to maximize the average bridging resolution F-scores on development data. The SPANBERT pre-trained model is an extension of BERT aimed at better learning of the representations of text spans. 1 Like BERT, SPAN-BERT takes as input a sequence of subword tokens T = [t 1 , ..., t n ] and produces a sequence of contextualized vector representations T = [t 1 , ..., t n ]. Unlike BERT, which randomly selects individual tokens for masking (where each token selected for masking is replaced with a special [M ASK] token), SPANBERT employs a span masking scheme where spans of tokens are masked in order to better learn span representations. SPANBERT employs two pre-training objectives: Masked Language Modeling (MLM) Given a masked span consisting of contiguous tokens (t s , ..., t e ), the model is asked to predict for each masked token t i in the span the original token using t i . The MLM loss, L M LM , is the cross entropy loss. Span Boundary Objective (SBO) Given a masked span consisting of contiguous tokens (t s , ..., t e ), the model is asked to predict for each token t i in the masked span the original token using the contextualized vectors of two tokens, namely the token to the left of the span boundary and the one to the right of its span boundary (i.e., t s-1 and t e+1 ), as well as the position embedding of the target token p i . The SBO loss, L SBO , is the cross-entropy loss. Figure Next, we present PAIRSPANBERT, an extension of SPANBERT specialized for bridging resolution. To create PAIRSPANBERT, we use SPANBERT as a starting point and add a pre-training step to it that would enable the model to learn the contexts in which two mentions are implicitly linked to each 1 Although SPANBERT is often viewed as an extension of BERT, not everything in BERT appears in SPANBERT. For example, while BERT is pre-trained on the so-called next sentence prediction (NSP) task, SPANBERT is not. Figure 2: An illustration of the masking scheme and the objectives in SPANBERT. Span masking masks all the subword tokens in the span "severe food restriction". Given the masked token "food", MLM makes predictions based on the contextualized vector t 5 , whereas SBO makes predictions based on the external boundary tokens of the masked span, t 3 and t 7 , as well as the position embedding p 2 , which indicates that "food" is the second token after t 3 . other from data that is automatically generated either heuristically or via distant supervision with the help of a knowledge graph. To do so, we will describe how we obtain automatically generated data (Section 4.1), the masking scheme (Section 4.2), and the pre-training task (Section 4.3). We aim to collect automatically labeled data that would enable the model to learn the contexts in which two mentions are implicitly linked. As noted in the introduction, a pre-training task tends to be more effective for improving a target task (which in our case is bridging resolution) if the pre-training task resembles the target task. 
Hence, we seek to collect automatically labeled data in which the two implicitly linked mentions are likely to have a bridging relation. We begin by (1) collecting noun pairs that are likely involved in a bridging relation in a context-independent manner, and then (2) using these pairs to automatically label sentences. We obtain noun pairs that are likely to be involved in a bridging relation heuristically (via the syntactic structures of noun phrases (NPs)) and via distance supervision (with ConceptNet), as described below. Syntactic Structures of NPs Following Hou (2018b), we extract noun pairs from the automatically parsed Gigaword corpus ConceptNet Next, we show how to extract noun pairs that are likely involved in a bridging relation from ConceptNet ConceptNet is a knowledge graph that connects phrases with labeled edges. It is built on various sources such as Open Mind Common Sense Since not all ConceptNet relations are useful for bridging resolution, we empirically identify the useful relations w.r.t. each evaluation dataset (e.g., ISNotes) as follows. First, for each ConceptNet relation type r, we apply the noun pairs extracted from r (see the previous paragraph) to the training portion of the dataset, positing a bridging link between two nouns in a training document if (1) their heads are related according to r and (2) they appear within two sentences of each other. Then, we compute a bridging resolution F-score w.r.t. r using the resulting bridging links. Finally, we sort the relation types in decreasing order of F-score and retain the top k relation types that collectively maximize the bridging resolution F-score on the training set. Only the noun pairs that are related through the selected relation types will be used to create automatically labeled data. The ConceptNet relation types selected for the three datasets (ISNotes, BASHI, ARRAU RST) can be found in Appendix A. The relation types that are used in all three datasets include RELAT-EDTO, SYNONYM, HASA, ISA, ATLOCATION, CAPABLEOF, and PARTOF. Intuitively, all of these relation types are closely related to bridging. The success of pre-training stems in part from learning from very large amounts of labeled data. Automatic generation of labeled data will enable us to easily generate a large amount of (noisily) labeled data and allow the model to learn a variety of contexts in which two mentions are likely to have a bridging relation. In this subsection, we describe how we create automatically labeled instances, each of which is composed of one of the noun pairs collected in the previous subsection (through syntactic structures or ConceptNet) and the surrounding context. For each document in parsed Gigaword, we automatically posit a bridging link between two nouns if two conditions are satisfied. First, they appear in one of the noun pairs collected in the previous subsection. Second, they are no more than two sentences apart from each other (this is motivated by the observation that bridging links typically appear in a two-sentence window). There is a small caveat, however. Recall that the two nouns in a noun pair (X, Y) extracted from the syntactic structures play an asymmetric role, where X is an anaphor and Y is its antecedent. So, when applying the first condition to the pairs collected from the syntactic structures, we consider the condition satisfied only if X appears after Y in the associated document. 
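A simplified sketch of this document-level labeling step is given below; the mention heads, sentence indices, and the two sets of noun pairs are assumed to come from the parsed Gigaword documents, and the data structures are hypothetical.

```python
def annotate_bridging_links(mentions, syntactic_pairs, conceptnet_pairs,
                            max_sentence_gap=2):
    """Posit bridging links between the mentions of a single document.

    mentions:         list of (head_noun, sentence_index) in document order.
    syntactic_pairs:  ordered (anaphor_head, antecedent_head) pairs extracted
                      from NP structures; the anaphor must follow the antecedent.
    conceptnet_pairs: unordered head pairs (frozensets) from selected relations.
    """
    links = []
    for j in range(len(mentions)):                 # earlier mention
        for i in range(j + 1, len(mentions)):      # later mention
            head_i, sent_i = mentions[i]
            head_j, sent_j = mentions[j]
            if sent_i - sent_j > max_sentence_gap:
                continue
            # Syntactic pairs are ordered: the later mention must be the anaphor.
            if (head_i, head_j) in syntactic_pairs:
                links.append((i, j))
            # ConceptNet pairs carry no anaphor/antecedent marking, so only
            # membership and sentence distance are checked.
            elif frozenset((head_i, head_j)) in conceptnet_pairs:
                links.append((i, j))
    return links

if __name__ == "__main__":
    doc = [("company", 0), ("office", 1), ("market", 3)]
    syn = {("office", "company")}               # "office" bridges to "company"
    cn = {frozenset(("market", "company"))}
    print(annotate_bridging_links(doc, syn, cn))  # [(1, 0)]
```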
For the noun pairs collected from ConceptNet, we do not have such a restriction since we do not mark which noun is the anaphor and which noun is the antecedent for each ConceptNet relation type. Using the method described in the previous subsection, we will be able to automatically annotate each Gigaword document with bridging links. Next, we describe the two masking schemes we employ in PAIRSPANBERT, based on which we will define the pre-training tasks to predict the masked tokens in the next subsection. PAIRSPANBERT assumes as input a segment of up to 512 tokens (which in our case is taken from an automatically annotated Gigaword document). We define two masking schemes to mask the tokens in a given segment. First, we employ span masking, as described in the SBO task in Section 3.3 where randomly selected spans of tokens are replaced with the [M ASK] tokens. This masking strategy does not rely on the automatically identified bridging relations. Second, we define an anchor masking strategy, where we randomly choose the antecedents (i.e., anchors) in our automatically identified bridging relations and replace each (subword) token in each selected antecedent with the [M ASK] token. We consider both masking schemes important for PAIRSPANBERT. As bridging resolution involves identifying relations between spans, span masking will ensure that the model learns good span representations. In contrast, anchor masking is designed to eventually enable the model to learn the contexts in which two nouns are likely involved in a bridging relation. Following previous work PAIRSPANBERT employs three pre-training tasks, MLM, SBO, and Associative Noun Objective (ANO). The MLM and SBO tasks are the same as those used in SPANBERT (see Section 3.3): we apply them to predict the tokens masked by both span masking and anchor masking. ANO is a novel pre-training task we define specifically to enable the model to learn knowledge of bridging. Unlike MLM and SBO, which we apply to the masked tokens produced by both masking schemes, ANO is applicable only to the masked tokens produced by anchor masking. Specifically, given a sequence of input tokens T = [t 1 , ..., t n ] and a masked anchor anc consisting of subword tokens (t s1 , ..., t e1 ), the goal of ANO is to predict an anaphor ana consisting of subword tokens (t s2 , ..., t e2 ). 2 The probability that ana is associ-2 An anchor may be associated with more than one anaphor. Given the masked anchor "company", L AN O calculates the probability that "office" is associated with "company" using the contextualized vectors of the start and end subword tokens of (masked) "company" and "office", t 2 and t 7 , according to Equation We calculate the probability of token t i given token t j in the sequence T using the contextualized vectors T = [t 1 , ..., t n ] produced by SPANBERT. where s(t i , t j ), the similarity of t i and t j , is computed as (w • t i ) • t j , w is a trainable vector of parameters, • is the dot product, and • is elementwise multiplication. Figure Finally, we compute the loss for PAIRSPAN-BERT L as the sum of the losses of its three pretraining objectives. 5 Evaluation Corpora. For evaluation, we employ three commonly used corpora for bridging resolution, namely Baseline systems. We employ five baselines. The first baseline is a state-of-the-art rule-based approach by The remaining baselines are all SPANBERTbased. 
The third and fourth baselines are the stateof-the-art SPANBERT-based resolver and its hybrid version introduced in Section 3.2 (denoted as SBERT and SBERT(R) respectively in Table details. To pre-train PAIRSPANBERT, we initialize it with the SPANBERT-large checkpoint and continue pretraining on the Gigaword documents automatically labeled with bridging links. Recall that these links are created using the noun pairs extracted from two sources: syntactic structures and ConceptNet. Rather than always use both sources to create bridging links, we use dev data to determine whether we should use one (and if so, which one) or both of them. We optimize PAIRSPANBERT using Adam (Kingma and Ba, 2014) for 4k steps with a batch size of 2048 through gradient accumulation, a maximum learning rate of 1e-4, and a linear warmup of 400 steps followed by a linear decay of the learning rate. The remaining parameters are the same as those in SPANBERT. Pre-training is performed on a machine with four A100 GPUs and lasts for a day. We fine-tune both SPANBERT and PAIRSPAN-BERT for up to 400 epochs with Adam (Kingma and Ba, 2014) in each dataset, with early stopping based on the development set. The version of SPANBERT we use is SPANBERT-large. The learning rates for SPANBERT and PAIRSPAN-BERT are searched out of {1e-5, 2e-5, 3e-5}, while the task learning rates are searched out of {1e-4, 2e-4, 3e-4, 4e-4}. We split each document into segments of length 384. Each model consid-ers up to the K closest preceding candidate antecedents. We search K out of {50, 80, 100, 120, 150}. We search the weight parameter for the rule score out of {50, 100, 150, 200}. Following End-to-end setting. The top half of each subtable in Table The best resolution F-score is achieved by PS-BERT(R), which is created by replacing SPAN-BERT with PAIRSPANBERT in SBERT(R), on ISNotes and BASHI and by PSBERT, which is created by replacing SPANBERT with PAIRSPAN-BERT in SBERT, on ARRAU RST. PAIRSPAN-BERT considerably improves the best baseline in resolution F-score by 2.3 points on ISNotes, 1.3 points on BASHI, and 1.5 points on ARRAU RST. PAIRSPANBERT's recognition F-scores are also generally higher than those of the SPANBERTbased resolvers. Although the noun pairs fail to improve SBERT when used as features, our results show that using these noun pairs to create automatically labeled data for pre-training is a better method to exploit such noisy information. Overall, we manage to achieve the best results to date on the three datasets using either PSBERT or PSBERT(R). Gold mention setting. Results for the gold mention setting are shown in the bottom half of each subtable in Table We conclude this subsection with two points that we believe deserve mention. First, all the PAIRSPANBERT results reported in Table Error analysis of the best end-to-end models. We conduct an error analysis of our top-performing end-to-end models, PSBERT(R) for ISNotes and BASHI and PSBERT for ARRAU RST, to gain additional insights into them. Overall, it appears that these models struggle to recognize the majority of the bridging anaphors, with the recall scores ranging between 25.6% and 39.5% on the three datasets. In addition, only a small percentage of the recall errors in bridging anaphora recognition are due to mention prediction errors: 3%, 1.3%, and 2% of the gold bridging anaphors are misclassified as non-mentions in ISNotes, BASHI, and ARRAU RST, respectively. 
These models consistently make more recall errors at identifying definite bridging anaphors (i.e., NPs modified by the definite article "the") than other bridging anaphors across all datasets. For instance, on ISNotes, the recall scores of identifying definite bridging anaphors and other bridging anaphors are 31% and 45%, respectively. Next, we analyze the precision errors on ISNotes and ARRAU RST, as BASHI does not have mention annotations. Mention prediction errors (i.e., misclassifying non-mentions as bridging anaphors) account for 8.7% and 10.9% of the precision errors on ISNotes and ARRAU RST, respectively. On ISnotes, the majority of the precision errors are caused by misclassifying new and old mentions as bridging anaphors, accounting for 43% and 25% of the precision errors, respectively. On ARRAU RST, 71% of the precision errors are due to new mentions being misclassified as bridging anaphors. These findings corroborate the results reported in previous research on bridging recognition Comparison of PSBERT(R) and SBERT(R) on ISNotes and BASHI. We further compare our best end-to-end resolver, PSBERT(R), with the previous state-of-the-art resolver, SBERT(R). On ISNotes, PSBERT(R) predicts 35% more bridging pairs than SBERT(R), resulting in a higher recall for recognizing bridging anaphors (39.5% vs. 31.6%). Overall, PSBERT(R) is better than SBERT(R) at predicting bridging pairs in which the bridging anaphors are not modified by any determiners (i.e., bare NPs), such as "guests" or "walls". On BASHI, however, the trend is the opposite. PS-BERT(R) predicts 18% less bridging pairs than SBERT(R) but achieves a higher precision score for bridging anaphora recognition (43.0% vs. 36.0%). Comparison of PSBERT and SBERT on AR-RAU RST. On ARRAU RST, we compare PS-BERT with SBERT in the end-to-end setting. Both models predict a similar number of bridging pairs, but PSBERT achieves a higher precision for bridging anaphor recognition (31.1% vs. 29.7%). We observe that PSBERT is better than SBERT at recognizing bridging anaphors that are bare NPs, especially proper names such as "Seoul". We designed a novel pre-training task for bridging resolution using automatically annotated documents that contain noun pairs that are likely to be linked via implicit relations, and demonstrated that our newly pre-trained model, PAIRSPANBERT In future work, we plan to apply PAIRSPAN-BERT to other language processing tasks, particularly relation extraction tasks, since the noun pairs extracted from the syntactic structures and Con-ceptNet are likely to have non-identical relations. When evaluating the resolvers in the gold mention setting, we use the "harsh" evaluation method that is also employed in some previous work (e.g., A category of precision errors arises from erroneously identified mentions. For example, an endto-end rule (wrongly) posits "federal district court in Dallas" and "the Fifth U.S. Circuit Court" as having a bridging relation, but "the Fifth U.S. Circuit Court" is not a gold mention. The rules in the end-to-end setting underperform their counterparts in the gold mention setting by 5.3% in recognition precision and by 4.1% in resolution precision. Recall from Section 4.1.1 that we collect noun pairs from both the syntactic structures and ConceptNet, which are subsequently applied to the Gigaword documents to automatically annotate them with bridging relations (Section 4.1.2). 
It is worth mentioning that the results of Rules(R) for the gold mention setting in

End-to-End Setting
  SBERT(R)    35.1  23.9  31.2  17.0  24.8  14.8
  CSBERT(R)   34.4  23.6  30.8  16.7  24.0  14.9
Gold Mention Setting
  SBERT(R)    38.6  26.8  32.6  18.7  29.4  20.1
  CSBERT(R)   37.4  26.9  31.9  18.5  30.0  20.3

One may argue that the comparison between PAIRSPANBERT and SPANBERT in our experiments is not entirely fair. Specifically, PAIRSPANBERT may have an unfair advantage over SPANBERT because it is pre-trained for more epochs than SPANBERT. To investigate whether the performance improvement of PAIRSPANBERT stems from the additional pre-training steps, we conduct an experiment to determine whether SBERT(R) can be improved with additional pre-training. Specifically, we additionally pre-train SBERT(R) using MLM and SBO on the same dataset as PAIRSPANBERT for as many epochs as we pre-train PAIRSPANBERT. | 672 | 2,404 | 672 |
Style Transfer as Data Augmentation: A Case Study on Named Entity Recognition | In this work, we take the named entity recognition task in the English language as a case study and explore style transfer as a data augmentation method to increase the size and diversity of training data in low-resource scenarios. We propose a new method to effectively transform the text from a high-resource domain to a low-resource domain by changing its style-related attributes to generate synthetic data for training. Moreover, we design a constrained decoding algorithm along with a set of key ingredients for data selection to guarantee the generation of valid and coherent data. Experiments and analysis on five different domain pairs under different data regimes demonstrate that our approach can significantly improve results compared to current state-of-the-art data augmentation methods. Our approach is a practical solution to data scarcity, and we expect it to be applicable to other NLP tasks. 1 | Large-scale pre-trained language models (PLMs) such as BERT Data augmentation is effective in addressing data scarcity. Previous work In this work, we explore the potential of employing style transfer as a way of data augmentation in cross-domain settings. Style transfer on natural language aims to change the style-related attributes of text while preserving its semantics, which makes it a reasonable alternative for the purpose of data augmentation. Here, we take the named entity recognition (NER) task as a case study to investigate its effectiveness. The general pipeline is shown in Figure We formulate style transfer as a paraphrase generation problem following previous work In summary, our contributions are as follows: 1. We propose a novel approach for data augmentation that leverages style transfer to im-prove the low-resource NER task and show that our approach can consistently outperform previous state-of-the-art methods in different data regimes. 2. We present a constrained decoding algorithm along with a set of key ingredients to guarantee the generation of valid and coherent data. 3. Our proposed solution is practical and can be expected to be applicable to other lowresource NLP tasks as it does not require any task-specific attributes. | Style Transfer Style transfer aims to adjust the stylistic characteristics of a sentence while preserving its original meaning. It has been widely studied in both supervised To facilitate research in this direction, we study style transfer as data augmentation and propose a novel approach to explore transferring the stylerelated attributes of text to add more variations in the training data. Data Augmentation Data augmentation has been recently receiving increasing and widespread attention in the field of NER. The mainstream methods can be group into two categories: rulebased approaches Considering a nonparallel NER dataset D consisting of source data D src from a high-resource domain and target data D tgt from a low-resource domain, training a model directly on D src and evaluating on D tgt is expected to give low prediction performance due to the stylistic differences (e.g., lexicons and syntax) between domains. In this work, we transform the data from a source domain to a target domain by changing text style and use the resulting transformed data to improve NER performance on D tgt . 
To this end, we assume access to a dataset P that contains pairs of parallel source and target sentences, respectively, to provide supervision signals for style transfer and a pre-trained generative model G θ based on an encoder-decoder where ŷ = {ŷ 1 , ŷ2 , ..., ŷM } is the generated sentence of length M that has the same meaning as x but a different style. Data Preparation Applying pre-trained generative models requires converting NER tokens and labels into a linearized format. In this work, we assume that the NER dataset follows the standard BIO labeling schema To mitigate this issue, we propose to linearize the data by only adding <START_ENTITY_TYPE> and <END_ENTITY_TYPE> special tokens to the beginning and the end of each entity span. For instance, the sample "A rainy day in [New] B-LOC [York] I-LOC " will be converted to "A rainy day in <START_LOC> New York <END_LOC>". Furthermore, we also prepend task prefixes to the beginning of the sentences to guide the direction of style transfer, where a prefix is a sequence of words for task description specifying the source and target styles of transfer (e.g., "transfer source to target: "). In this work, we explore style transfer as a data augmentation method to improve the performance of NER systems on low-resource domains. We propose an adversarial learning framework to generate a paraphrase of the source data in a highresource domain whose style conforms to the target data in a low-resource domain. The proposed framework comprises two main components: (1) paraphrase generation, which aims to rephrase the sentence to a different style with supervision, and (2) cycle-consistent reconstruction, which aims to transfer the sentence to a different style and then back to its original style with no supervision. The overall architecture is shown in Figure Paraphrase Generation (PG) Recent work Cycle-consistent Reconstruction (CR) Considering a nonparallel NER dataset consisting of data from D src and D tgt in the source and target styles, respectively, we aim to change the style of text as a way to augment data. Previous mechanism enables the generator G θ to learn different mapping functions, i.e., G θ (x src ) → ŷtgt and G θ (x tgt ) → ŷsrc . Intuitively, the learned mapping functions should be reverses of each other. The sentence transferred by one mapping function should be able to be transferred back to its original representation using the other mapping function. Such cycle-consistency can not only encourage content preservation between the input and output, but also reduce the search space of mapping functions To this end, as shown in Figure where π are class probabilities for tokens and |V | denotes the size of vocabulary. g are i.i.d. samples drawn from Gumbel(0, 1) distribution and τ is the temperature hyperparameter Additionally, the generator G θ is adversarially trained with a style discriminator D θ , which takes as the input the latent representations of either the input sentence or its paraphrase, and discriminates the input between the source and target styles: Overall Training Objective The overall training objective can be formalized as: where λ pg , λ cr , and λ adv reflect the relative importance of L pg , L cr , and L adv , respectively. The training process begins with the paraphrase generation as the first stage: the generator G θ is trained with the paraphrase generation objective while the discriminator D θ is trained with the adversarial learning objective. 
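Before the second training stage is described, here is a brief sketch of the linearization format introduced above; apart from the special tokens and the prefix quoted in the text, the function name and its interface are assumptions:

# Convert BIO-tagged tokens into the linearized format described above, e.g.
# ["A","rainy","day","in","New","York"] with tags O,O,O,O,B-LOC,I-LOC
# -> "transfer source to target: A rainy day in <START_LOC> New York <END_LOC>"
def linearize(tokens, tags, prefix="transfer source to target: "):
    out, i = [], 0
    while i < len(tokens):
        if tags[i].startswith("B-"):
            ent_type = tags[i][2:]
            span = [tokens[i]]
            i += 1
            while i < len(tokens) and tags[i] == f"I-{ent_type}":
                span.append(tokens[i])
                i += 1
            out.append(f"<START_{ent_type}> {' '.join(span)} <END_{ent_type}>")
        else:
            out.append(tokens[i])
            i += 1
    return prefix + " ".join(out)

print(linearize(["A", "rainy", "day", "in", "New", "York"],
                ["O", "O", "O", "O", "B-LOC", "I-LOC"]))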
In the second stage, both paraphrase generation and cycle-consistent reconstruction are involved: the cycle-consistent reconstruction objective is further incorporated to train the generator G θ . Constrained Decoding When given the input sequence x and the already generated tokens {y 1 , y 2 , ..., y i-1 }, a straightforward approach to generate the next word y i is to greedily select the word with the highest probability at the timestep i. However, this greedy decoding approach may result in the sub-optimal decision and cannot guarantee to generate valid NER samples (e.g, mismatch of entity types and incomplete sentences). To address these issues, we propose to apply a constrained decoding algorithm based on prefix trees Data Selection Even with a valid structure, the generated sentences may still remain unreliable as it may have a low quality due to degenerate repetition and incoherent gibberish • Consistency: a confidence score from a pretrained style classifier as the extent a generated sentence is in the target style. • Adequacy: a confidence score from a pretrained NLU model on how much semantics is preserved in the generated sentence. • Fluency: a confidence score from a pretrained NLU model indicating the fluency of the generated sentence. • Diversity: the edit distance between original sentences and the generated sentences at the character level. For each sentence, we over-generate k=10 candidates. We calculate the above metrics (see Appendix C for more details) and assign a weighted score of these metrics to each candidate. Then we use the score to rank all candidates and select the best one for training NER systems. In this section, we present the experimental setup and results. We extensively evaluate our proposed method on five different domain pairs and compare our proposed method with state-of-the-art systems on data augmentation for NER in crossdomain scenarios. Datasets We focus exclusively on formality style transfer which aims to transfer formal sentences to informal sentences. We use the parallel style transfer data from the GYAFC Base Models For style transfer, we use a pretrained T5 base model For NER, we use a sequence labeling framework consisting of a pre-trained BERT base model Data Regimes To better understand the effectiveness of our proposed method, we undertake experiments in three different data regimes by varying the amount of training data in the target domain, namely FEW-SHOT, LOW-RESOURCE, and FULL-SET scenarios. For all scenarios, we assume full access to P and D src but different access to D tgt . In the FEW-SHOT scenario, we adopt a Nway K ∼ 2K-shot setting following We investigate the following methods on data augmentation for NER in cross-domain scenarios for comparison: (1) Adaptive Data Augmentation (ADA) Using Same Amount of Pseudo Data Here, we randomly select 1K, 2K, 3K, and 4K sam- Using Large Amount of Pseudo Data Theoretically, we could generate an infinite amount of pseudo data for training. Thus, we undertake experiments using more pseudo data combined with target data for training. Here, we make comparison with three different method to support the effectiveness of our proposed method: (1) S + T: fine-tune on source and target data together, (2) T only: fine-tune on only target data, and (3) S → T: fine-tune on source data first and then target data. For our proposed method P + T, we gradually increase the number of generated samples combined with target data as for training. 
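The Gumbel-Softmax step referenced above can be sketched in its standard form as follows (class probabilities π over a vocabulary of size |V|, i.i.d. Gumbel(0, 1) noise g, temperature τ); the exact variant used in the framework, e.g. whether a straight-through estimator is applied, may differ:

import torch
import torch.nn.functional as F

def gumbel_softmax_sample(logits, tau=1.0, eps=1e-10):
    # logits: (..., |V|) unnormalized scores over the vocabulary
    log_pi = F.log_softmax(logits, dim=-1)           # log of class probabilities pi
    u = torch.rand_like(log_pi)
    g = -torch.log(-torch.log(u + eps) + eps)        # i.i.d. Gumbel(0, 1) samples
    return F.softmax((log_pi + g) / tau, dim=-1)     # differentiable "soft" one-hot

# PyTorch also ships an equivalent helper (optionally straight-through with hard=True):
# y = F.gumbel_softmax(logits, tau=1.0, hard=False, dim=-1)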
We present the results in Figure We also notice a similar phenomenon while only using a few amounts of samples combined with target samples. We argue that, in such cases, the model can have an inductive bias towards memorization instead of generalization, and thus perform poorly on test data. Additionally, fine-tuning on the source data first and then target data S → T can achieve better results than simply fine-tuning on source and target data together S + T or only target data T only. Nevertheless, with more and more generated samples for training, our proposed method can significantly boost the model performance comparing against other methods in four domain pairs (BC, BN, MZ, and NW) while remains very competitive in WB. our proposed method to transfer the knowledge between domains. Without this component, the F1 score decreases by 39.15% on average. Additionally, constrained decoding also plays an important role in avoiding label mistakes, which can significantly hurt model performance. Moreover, the style discriminator is effective to enable the model to find generalized patterns while data selection can further boost the model performance. Overall, the ablation results demonstrate that each strategy in our proposed method is crucial in achieving promising results. Case Study Table In this paper, we propose an effective approach to employ style transfer as a data augmentation method. Specifically, we present an adversarial learning framework to bridge the gap between different text styles and transfer the entity knowledge from high-resource domains to low-resource domains. Additionally, to guarantee the generation of valid and coherent data, we design a constrained decoding algorithm along with a set of key ingredients for data generation and selection. We undertake experiments on five different domain pairs. The experimental results show that our proposed method can effectively transfer the data across domains to increase the size and diversity of training data for low-resource NER tasks. For the future work, we plan to explore style transfer as data augmentation in cross-lingual settings (i.e., transfer entity knowledge across languages instead of just domains). Additionally, since our approach is based on pre-trained language models, it would be interesting to explore leveraging pre-trained knowledge for data augmentation. Based on our studies, we find the following main limitations: (1) mismatch of annotation schema: we observe that the annotation schema between some NER datasets conflict with each other. The same entity span can be labeled into different entity types. For example, "America" is an instance of GPE in the OntoNotes 5.0 dataset but LOCA-TION in the WNUT17 dataset. This phenomenon introduces noise and make it difficult for models to understand entity types and learn transformations. (2) mismatch of labeling schema: the labeling schema in different NER datasets can be very different. For instance, OntoNotes 5.0 dataset contains 18 coarse-grained entity types while FEW-NERD contains 9 coarse-grained and 66 fine-grained entity types. Using such datasets as source and target data may not lead to a significant improvement gains. We hope our findings can inform potential avenues of improvement on data augmentation for NER and inspire the further work in this research direction. 
Due to the limitation of computational resources, we set a maximum length of 64 to filter out long linearized sentences in both the style transfer dataset P and the NER dataset D when training the proposed framework to generate pseudo data. We also use a pre-trained BERT large model. We simply fine-tune a T5 base model.

D Hyper-parameters and Fine-tuning

Table | 912 | 1,265 | 912
A Bayesian mixture model for term re-occurrence and burstiness | This paper proposes a model for term reoccurrence in a text collection based on the gaps between successive occurrences of a term. These gaps are modeled using a mixture of exponential distributions. Parameter estimation is based on a Bayesian framework that allows us to fit a flexible model. The model provides measures of a term's re-occurrence rate and withindocument burstiness. The model works for all kinds of terms, be it rare content word, medium frequency term or frequent function word. A measure is proposed to account for the term's importance based on its distribution pattern in the corpus. | Traditionally, Information Retrieval (IR) and Statistical Natural Language Processing (NLP) applications have been based on the "bag of words" model. This model assumes term independence and homogeneity of the text and document under consideration, i.e. the terms in a document are all assumed to be distributed homogeneously. This immediately leads to the Vector Space representation of text. The immense popularity of this model is due to the ease with which mathematical and statistical techniques can be applied to it. The model assumes that once a term occurs in a document, its overall frequency in the entire document is the only useful measure that associates a term with a document. It does not take into consideration whether the term occurred in the beginning, middle or end of the document. Neither does it consider whether the term occurs many times in close succession or whether it occurs uniformly throughout the document. It also assumes that additional positional information does not provide any extra leverage to the performance of the NLP and IR applications based on it. This assumption has been shown to be wrong in certain applications Existing models for term distribution are based on the above assumption, so they can merely estimate the term's frequency in a document or a term's topical behavior for a content term. The occurrence of a content word is classified as topical or non-topical based on whether it occurs once or many times in the document In this paper we describe a model for term reoccurrence in text based on the gaps between successive occurrences of the term and the position of its first occurrence in a document. The gaps are modeled by a mixture of exponential distributions. Nonoccurrence of a term in a document is modeled by the statistical concept of censoring, which states that the event of observing a certain term is censored at the end of the document, i.e. the document length. The modeling is done in a Bayesian framework. The organization of the paper is as follows. In section 2 we discuss existing term distribution models, the issue of burstiness and some other work that demonstrates the failure of the "bag of words" as-sumption. In section 3 we describe our mixture model, the issue of censoring and the Bayesian formulation of the model. Section 4 describes the Bayesian estimation theory and methodology. In section 5 we talk about ways of drawing inferences from our model, present parameter estimates on some chosen terms and present case studies for a few selected terms. We discuss our conclusions and suggest directions for future work in section 6. | Previous attempts to model a term's distribution pattern have been based on the Poisson distribution. If the number of occurrences of a term in a document is denoted by k, then the model assumes: for k = 0, 1, 2, . . . 
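The display equation implied here is the standard Poisson form; the following is a reconstruction from the surrounding description rather than the paper's own typesetting, with λ taken as the expected number of occurrences of the term in a document:

P(k) = \frac{e^{-\lambda}\,\lambda^{k}}{k!}, \qquad k = 0, 1, 2, \ldots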
Estimates based on this model are good for non-content, non-informative terms, but not for the more informative content terms The two-Poisson model is suggested as a variation of the Poisson distribution where α and (1α) denote the probabilities of a document in each of these classes. Often this model under-estimates the probability that a term will occur exactly twice in a document. Burstiness is a phenomenon of content words, whereby they are likely to occur again in a text after they have occurred once. • the probability that a term occurs in a document at all (document frequency) • the probability that it will occur a second time in a document given that it has occurred once • the probability that it will occur another time, given that it has already occurred k times (where k > 1). The drawbacks of this model are: (a) it cannot handle non-occurrence of a term in a document; (b) the model can handle only content terms, and is not suitable for high frequency function words or medium frequency terms; and (c) the rate of re-occurrence of the term or the length of gaps cannot be accounted for. We overcome these drawbacks in our model. A measure of burstiness was proposed as a binary value that is based on the magnitude of average-term frequency of the term in the corpus The popular "bag of words" assumption for text states that a term's occurrence is uniform and homogeneous throughout. A measure of homogeneity or self-similarity of a corpus can be calculated, by dividing the corpus into two frequency lists based on the term frequency and then calculating the χ 2 statistic between them We build a single model for a particular term in a given corpus. Let us suppose the term under consideration is x as shown in Figure • w i1 denotes the position of the first occurrence of term x in document i. • w i2 , . . . , w in i denotes the successive gaps between occurrences of term x in document i. • w in i +1 denotes the gap for the next occurrence of x, somewhere after the document ends. • cen i is the value at which observation w in i +1 is censored, as explained in section 3.2.2. We suppose we are looking through a document, noting when the term of interest occurs. Our model assumes that the term occurs at some low underlying base rate 1/λ 1 but, after the term has occurred, then the probability of it occurring soon afterwards is increased to some higher rate 1/λ 2 . Specifically, the rate of re-occurrence is modeled by a mixture of two exponential distributions. Each of the exponential components is described as follows: • The exponential component with larger mean (average), 1/λ 1 , determines the rate with which the particular term will occur if it has not occurred before or it has not occurred recently. • The second component with smaller mean (average), 1/λ 2 , determines the rate of reoccurrence in a document or text chunk given that it has already occurred recently. This component captures the bursty nature of the term in the text (or document) i.e. the within-document burstiness. The mixture model is described as follows: for j ∈ {2, . . . , n i }. p and (1p) denote respectively, the probabilities of membership for the first and the second exponential distribution. There are a few boundary conditions that the model is expected to handle. We take each of these cases and discuss them briefly: The model treats the first occurrence of a term differently from the other gaps. The second exponential component measuring burstiness does not feature in it. 
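The mixture density introduced above presumably takes the standard two-component exponential form below; this is a reconstruction from the surrounding description, not the paper's own equation:

\phi(w_{ij}) = p\,\lambda_{1}e^{-\lambda_{1} w_{ij}} + (1-p)\,\lambda_{2}e^{-\lambda_{2} w_{ij}}, \qquad j \in \{2, \ldots, n_{i}\}

For the first occurrence, discussed next, only the λ1 component is retained.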
Hence the distribution is: Here we discuss the modeling of two cases that require special attention, corresponding to gaps that have a minimum length but whose actual length is unknown. These cases are: • The last occurrence of a term in a document. • The term does not occur in a document at all. We follow a standard technique from clinical trials, where a patient is observed for a certain amount of time and the observation of the study is expected in that time period (the observation might be the time until death, for example). In some cases it happens that the observation for a patient does not occur in that time period. In such a case it is assumed that the observation would occur at sometime in the future. This is called censoring at a certain point. In our case, we assume the particular term would eventually occur, but the document has ended before it occurs so we do not observe it. In our notation we observe the term n i times, so the (n i + 1) th time the term occurs is after the end of the document. Hence the distribution of w in i +1 is censored at length cen i . If cen i is small, so that the n th i occurrence of the term is near the end of the document, then it is not surprising that w in i +1 is censored. In contrast if cen i is large, so the n th i occurrence is far from the end of the document, then either it is surprising that the term did not re-occur, or it suggests the term is rare. The information about the model parameters that is given by the censored occurrence is, Also when a particular term does not occur in a document, our model assumes that the term would eventually occur had the document continued indefinitely. In this case the first occurrence is censored and censoring takes place at the document length. If a term does not occur in a long document, it suggests the term is rare. Our modeling is based on a Bayesian approach In the Bayesian approach one can assign distributions to the parameters in a model. We choose non-informative priors, as is common practice in Bayesian applications. So we put, p ∼ Unif orm(0, 1), and λ 1 ∼ Unif orm(0, 1) To tell the model that λ 2 is the larger of the two λs, we put λ 2 = λ 1 + γ, where γ > 0, and γ ∼ Unif orm(0, 1) Also cen i depends on the document length d i and the number of occurrences of the term in that document, n i . Fitting mixture techniques is tricky and Figure In the Bayesian approach of parameter estimation, the parameters are uncertain, and it is assumed that they follow some distribution. In our case the parameters and the data are defined as: Θ = {p, λ 1 , λ 2 } denote the parameters of the model. W = {w i1 , . . . , w in i , w in i +1 } denotes the data. Hence based on this we may define the following: • f ( Θ) is the prior distribution of Θ as assigned in section 3.3. It summarizes everything we know about Θ apart from the data W . • f ( W | Θ) is the likelihood function. It is our model for the data W conditional on the parameters Θ. (As well as the observed data, the likelihood also conveys the information given by the censored values) • f ( Θ| W ) is the posterior distribution of Θ, given W . It describes our beliefs about the parameters given the information we have. Deriving the density function for a parameter set Θ after observing data W , can be achieved by using Bayes Theorem as: where f ( W ) is simply a normalizing constant, independent of Θ. 
It can be computed in terms of the likelihood and prior as: Hence equation 1 is reduced to: So, once we have specified the posterior density function f ( Θ| W ), we can obtain the estimates of the parameters Θ by simply averaging the values generated by f ( Θ| W ). The density function of Θ i , f (Θ i | W ) can be obtained by integrating f ( Θ| W ) over the remaining parameters of Θ. But in many cases, as in ours, it is impossible to find a closed form solution of f (Θ i ). In such cases we may use a simulation process based on random numbers, Markov Chain Monte Carlo (MCMC) Gibbs Sampling Parameter estimation was carried out using Gibb's Sampling on the WinBUGS software The parameters of the model can be interpreted in the following manner: • λ 1 = 1/λ 1 is the mean of an exponential distribution with parameter λ 1 . λ 1 measures the rate at which this term is expected in a running text corpus. λ 1 determines the rarity of a term in a corpus, as it is the average gap at which the term occurs if it has not occurred recently. Thus, a large value of λ 1 tells us that the term is very rare in the corpus and vice-versa. • Similarly, λ 2 measures the within-document burstiness, i.e. the rate of occurrence of a term given that it has occurred recently. It measures the term re-occurrence rate in a burst within a document. Small values of λ 2 indicate the bursty nature of the term. • p and 1p denote, respectively, the probabilities of the term occurring with rate λ 1 and λ 2 in the entire corpus. Table We choose for evaluation, terms from the Associated Press (AP) newswire articles, as this is a standard corpus for language research. We picked terms which had been used previously in the literature Table The top part of the table consists of the very frequently occurring function words occurring frequently throughout the corpus. These statements are supported by the low values of λ 1 and λ 2 . These values are quite close, indicating that the occurrence of these terms shows low burstiness in a running text chunk. This supports our heuristics about the value of λ 1 / λ 2 , which is small for such terms. Moderate, not very high values of p also support this statement, as the term is then quite likely to be gener- Table The second part of the table contains mostly nontopical content terms as defined in the literature Terms in the third part, as expected, are topical content terms. An occurrence of such a term defines the topic or the main content word of the document or the text chunk under consideration. These terms are rare in the entire corpus, and only appear in documents that are about this term, resulting in very high values of λ 1 . Also low values of λ 2 for these terms mean that repeat occurrences within the same document are quite frequent; the characteristic expected from a topical content term. Because of these characteristics, based on our heuristics these terms have very high values of λ 1 / λ 2 , and hence are considered the most informative terms in the corpus. Here we study selected terms based on our model. These terms have been studied before by other researchers. We study these terms to compare our findings with previous work and also demonstrate the range of inferences that may be derived from our model. 
These terms occur an approximately equal number of times in the AP corpus, and inverse document frequency was used to distinguish between them These terms were studied in connection with fitting Poisson distributions to their term distribution Both these terms have nearly equal inverse document frequency for the AP corpus These terms were studied in the context of an adaptive language model to demonstrate the fact that the probability of a repeat occurrence of a term in a document defies the "bag of words" independence assumption In this paper we present a model for term reoccurrence in text based on gaps between successive occurrences of a term in a document. Parameter estimates based on this model reveal various characteristics of term use in a collection. The model can differentiate a term's dependence on genre and collection and we intend to investigate use of the model for purposes like genre detection, corpus profiling, authorship attribution, text classification, etc. The proposed measure of λ 1 / λ 2 can be appropriately adopted as a means of feature selection that takes into account the term's occurrence pattern in a corpus. We can capture both within-document burstiness and rate of occurrence of a term in a single model. | 605 | 2,623 | 605 |
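To make the estimation procedure concrete, the following is a minimal sketch of the censored two-exponential mixture likelihood with a simple random-walk Metropolis sampler. The paper itself uses Gibbs sampling in WinBUGS; the sampler choice, toy data, starting point, and step size here are illustrative assumptions, and documents in which the term never occurs (first occurrence censored at the document length) are omitted for brevity:

import numpy as np

rng = np.random.default_rng(0)

def doc_loglik(p, lam1, lam2, first_gap, later_gaps, cen):
    # First occurrence: single-rate exponential (no burst component).
    ll = np.log(lam1) - lam1 * first_gap
    # Re-occurrence gaps: two-component exponential mixture.
    w = np.asarray(later_gaps)
    if w.size:
        ll += np.sum(np.log(p * lam1 * np.exp(-lam1 * w)
                            + (1 - p) * lam2 * np.exp(-lam2 * w)))
    # Next occurrence censored at cen: survival function of the mixture.
    ll += np.log(p * np.exp(-lam1 * cen) + (1 - p) * np.exp(-lam2 * cen))
    return ll

def log_posterior(theta, docs):
    p, lam1, gamma = theta
    if not (0 < p < 1 and 0 < lam1 < 1 and 0 < gamma < 1):
        return -np.inf                       # outside the Uniform(0, 1) priors
    lam2 = lam1 + gamma
    return sum(doc_loglik(p, lam1, lam2, *d) for d in docs)

def metropolis(docs, n_iter=5000, step=0.02):
    theta = np.array([0.5, 0.05, 0.05])      # arbitrary starting point
    cur = log_posterior(theta, docs)
    samples = []
    for _ in range(n_iter):
        prop = theta + rng.normal(scale=step, size=3)
        new = log_posterior(prop, docs)
        if np.log(rng.random()) < new - cur:
            theta, cur = prop, new
        samples.append(theta.copy())
    return np.array(samples)

# Toy data: (first_gap, later_gaps, censoring_value) per document, with gaps scaled
# so that plausible rates fall inside (0, 1) as the priors assume.
docs = [(12.0, [3.0, 5.0, 2.0], 40.0), (30.0, [4.0], 15.0)]
draws = metropolis(docs)
print(draws[len(draws) // 2:].mean(axis=0))  # posterior means of (p, lambda1, gamma)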
Your Answer is Incorrect... Would you like to know why? Introducing a Bilingual Short Answer Feedback Dataset | Handing in a paper or exercise and merely receiving "bad" or "incorrect" as feedback is not very helpful when the goal is to improve. Unfortunately, this is currently the kind of feedback given by many Automatic Short Answer Grading (ASAG) systems. One of the reasons for this is a lack of content-focused elaborated feedback datasets. To encourage research on explainable and understandable feedback systems, we present the Short Answer Feedback dataset (SAF). Similar to other ASAG datasets, SAF contains learner responses and reference answers to German and English questions. However, instead of only assigning a label or score to the learners' answers, SAF also contains elaborated feedback explaining the given score. Thus, SAF enables supervised training of models that grade answers and explain where and why mistakes were made. This paper discusses the need for enhanced feedback models in real-world pedagogical scenarios, describes the dataset annotation process, gives a comprehensive analysis of SAF, and provides T5-based baselines for future comparison. 1 | Assessment and feedback are essential to highquality education Besides being cost-and time-efficient, automating assessment also offers unique teaching opportunities. As long as systems give individual, responsespecific feedback, learners may retry or take additional assignments and receive instantaneous feedback as often as they need. Additionally, knowing that a system instead of one's teacher or professor will evaluate one's assignment can also reduce anxiety and help learners focus on their work instead of worrying about their reputation In particular, Transformer models are approaching human experts' performance on specific datasets in the Automatic Short Answer Grading (ASAG) field | What are extension headers in IPv6 and where are they located in a packet? What is the main advantage of extension headers compared to IPv4? Reference Answer: Extension headers are used to extend the fixed IPv6 header with additional, optional network layer information. If present, they are located between the fixed header/main header and payload/upper-layer header/ transport-layer header. Main advantage: One of the following advantages are considered fully correct: 1. It allows the appending of new options without changing the header. 2. IPv6 packets with optional headers are typically processed faster/simpler by intermediate devices as most of the options are ignored (except "Hop-by-Hop Extension") while they are processed by all routers in IPv4 unless ordered otherwise. 1) Due to hardware constraints, some nodes may be out of the range of others. 2) Mobile routing requires more flexibility. The environment is very dynamic and the routing mechanism has to adapt to that. Verification: 0.5 out of 1.0 points (Partially Correct) While the second challenge of needing to be able to adapt to a dynamically changing Feedback: environment is correct, the first challenge stated is not a challenge specific to mobile routing. In a wired network, nodes typically don't have a direct connection to each other node as well. Table explanation may establish the necessary trust in the system's predictions. 
This kind of explanation is also called elaborated feedback In the Intelligent Tutoring Systems community, the need for elaborated feedback is well-known In contrast to other ASAG datasets, SAF contains detailed elaborated feedback explaining the scores assigned to learner responses. This allows for automatic scoring and opens the new task of providing response-specific, elaborated feedback illustrating a given score. The dataset currently contains 4,519 submissions, corresponding scores, and response-specific elaborated feedback. Additionally, we provide T5 While elaborated feedback datasets on language learning In recent years, the need for understandable, interpretable NLP models has been widely discussed The closest to our research is the WorldTree V2 dataset. Here, Some of the most well-known ASAG datasets stem from the SemEval 2013 challenge Lastly, structured collections of smaller and nonpublic datasets can be found in surveys by To remedy the lack of content-focused elaborated feedback datasets, we provide SAF, an English and German short answer dataset with explanations that serve as elaborated feedback. In total, the corpus contains 4,519 submissions similar to the example in Table We need reliable scoring and clear, detailed explanations to train understandable feedback models. Providing this is challenging for multiple reasons. Firstly, annotators need to have the necessary domain expertise and the pedagogical knowledge on how to provide understandable, well-received feedback. For instance, they should be aware of their A numerical value between 0 and 1 indicating the answer's correctness and completeness. Depending on the question, the range is discretized into steps, e.g. 0.125, so that the annotators do not have to make arbitrarily fine distinctions. Response Feedback Response-contingent elaborated Feedback. It explains why an answer is wrong or right without using formal error analysis An automatic labeling of the score. Includes the following labels: Incorrect (score=0), Correct (score=1), Partially Correct (all intermediate scores) feedback's emotional effect. At first glance, this may seem obvious, but it is easily overlooked in practice. An example of this became apparent during a pilot study we conducted to uncover pitfalls and train our annotators. Even though we provided guidelines on how to give feedback, questionable phrases like "This response fails to ..." were common as the annotators did not consider that the word "failing" may trigger negative associations and emotions in learners. Secondly, a common ground truth must be established for each question with clearly defined boundaries because various sources may define concepts differently. For example, the network protocol TCP alone has at least five different variations, all with unique advantages and disadvantages, leading to multiple possible answers to TCP related questions To ensure the necessary domain expertise, we selected two graduate students As can be seen in Figure Subsequently, annotators individually evaluated answers using the scoring rubric and the general annotation guideline. All English answers were annotated twice, while only half of the German answers were annotated doubly due to the prohibitive cost of experienced employees. The first step of combining the independently annotated answer files into a cohesive gold standard involved discussing disagreements with the annotators and researcher. 
Disagreements between the annotators were resolved by either choosing one of the annotations, compromising, or fusing them if both had merit. For example, one annotator may notice a missing fact A while the second annotator may find a mistake in B's explanation. Finally, the English gold feedback was checked by Grammarly as well as an English native speaker. Grammar and spelling mistakes were corrected, and sentences were simplified when the same information could be expressed more concisely, for example, by using the possessive form. Learners' answers were not post-processed because models would frequently encounter grammar and spelling mistakes in the wild. Therefore, this is a challenge approaches should overcome. The annotation process resulted in a corpus with the following score and label distribution seen in Table Figure To estimate our annotations' reliability, we rely on inter-annotator agreement measures. As the scores are interval scaled between 0 and 1, we report the percentage agreement and Krippendorff's Alpha. The annotators agreed on 89.46% of the cases on the English data, and α is 0.91 (N=2,112). On the German questions, the annotators agreed in 81.38% of the cases, and α is 0.78 (N=1,200). The high agreement on the overall dataset illustrates the effectiveness of our annotation process, especially when compared to the initially low agreement of α=0.36 achieved in our pilot study. We can assume the validity of our German data to be high, since our experienced annotators were also responsible for accepting or rejecting job results later on. Hence, their judgements should be consistent with the desired learning outcome. To estimate the validity of our English data, we assume that the end-of-term exam is a valid evaluation of students' knowledge. Of course, this is most likely not accurate in practice since the exam was not formally validated and only provides a snapshot of students' performance in a single 120-minute time frame. However, most of the question pool and exam structure have been employed and refined over multiple years. For this reason, we deem it a sufficient approximation. Nevertheless, the following results should be viewed as an indication of validity rather than a fact. The Spearman's rank correlation between the points achieved in the exam and the quizzes is 0.438 (p < 0.0001) with a sample size of 186. This is a moderate positive correlation between the exam and quiz results It is our responsibility to be transparent in our data collection process and protect the privacy of our learners. Our first step in this regard was to inform our learners of the data collection process. We posted to the college course's online learning platform and the description of the German job training. Both channels usually carry vital information for the learners. In our post, we • detailed how we would use the learners' answers to research and develop automatic assessment models. • asked learners to refrain from including personal information in their answers, such as names or addresses. This was also checked during the annotation process. • gave them contact information if they wanted their answers to be excluded from the data collection. We also clarified that this would not negatively impact them or their grades/access to jobs. None of the learners contacted us. • clarified that we would only release anonymized data in our publications. We anonymized German answers by stripping identifying information and randomizing the order. 
To anonymize the English learners' answers, we randomly assigned each group an ID. The group-to-ID mapping was done locally on one computer and was deleted after the dataset construction. Keeping a consistent group ID allows us to identify responses with quizID.questionID.groupID and, thus, publish a dataset where the other answers of a group can be incorporated to refine an assessment model. For example, responses QuizA.1.3 and QuizB.2.3 are written by the group assigned the ID 3. This characteristic is beneficial as it allows for training models that provide personalized feedback, considering the current answer and answers to related questions. Patterns of mistakes spanning multiple questions may be discovered in this setting. For example, if a group answered all performance evaluation questions incorrectly, they may not understand the probability theory underlying the questions. However, note that SAF's an-notators only considered the current answer when constructing their feedback. The goals of our experiments are threefold. Firstly, we want to provide baselines for the dataset. For this reason, it makes sense to report a wide range of metrics future work may want to utilize. Secondly, we hypothesize that including the question in the model's input would increase performance. Typically, only the student and reference answers are compared in ASAG As baselines, we utilize HuggingFace's implementation of the T5-base and mT5-base models The output is limited to 128 tokens and has the following format: "label/score feedback: feedback". We also enforce a minimum output sequence length of 11 tokens since models tended to refrain from generating feedback otherwise. In all experiments, 10% of the training data was splitoff for manual hyperparameter tuning and model selection. All models use gradient accumulation and an Adafactor We average SACREBLEU, Table We can see that T5 provides a strong baseline for this task, outperforming the majority baseline significantly. However, there is still room for improvement compared to human performance, especially on unseen questions. A closer inspection of the generated feedback also revealed that the model would often, and often senselessly, copy common phrases it saw during training with minor modifications (see Appendix B). This indicates that elaborated feedback tasks can be challenging even to large language models. Simultaneously, the models' high text similarity scores indicate a need for new evaluation metrics that measure similarity on a content-instead of lexical-level, enforcing that a text not only sounds well but also makes sense. Contrary to our belief, providing the model with more detailed scores instead of only labels during training does not improve the feedback generation's performance. It even worsens performance slightly for most metrics. On the English data, we observed that the question provided only a marginal benefit for unseen answers and a larger benefit for unseen questions. Interestingly, this trend does not seem to extend to the German dataset, as depicted in Table This paper introduces the elaborated feedback generation task. We provide a benchmarking dataset containing short answers, scores, and textual explanations of given scores to kick off this task. As of yet, the dataset consists of 4,519 submissions to German and English questions. We demonstrate SAF's reliability with high inter-annotator agreements. In Section 3.3, we presented aspects of the dataset we plan to improve. 
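A minimal sketch of the decoding settings described for the T5 baselines follows; the checkpoint id, the field separators in the input string, and all decoding defaults beyond the stated minimum and maximum lengths are assumptions, and a fine-tuned checkpoint would be required for the "score feedback: ..." output format to actually appear:

from transformers import T5ForConditionalGeneration, T5TokenizerFast

model_name = "t5-base"   # assumed; the paper fine-tunes T5-base (English) and mT5-base (German)
tokenizer = T5TokenizerFast.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# Input optionally prepends the question to the answer pair; the exact separators are assumptions.
source = ("question: What are extension headers in IPv6? "
          "reference: Extension headers extend the fixed IPv6 header ... "
          "student: The extension headers are placed between fixed header and payload.")

inputs = tokenizer(source, return_tensors="pt", truncation=True)
# Output format from the paper: "<label or score> feedback: <feedback>",
# limited to 128 tokens with an enforced minimum length of 11 tokens.
out = model.generate(**inputs, max_length=128, min_length=11)
print(tokenizer.decode(out[0], skip_special_tokens=True))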
While the dataset is sizable for a manually annotated task of this complexity, it is small compared to other NLP tasks' crawled, large-scale datasets. We plan to mitigate this by incorporating additional questions in future iterations of the dataset. The focus will be on more complex questions to improve the class balance and questions of other domains and languages to increase diversity. The model's ability to general-ize to unseen questions may also benefit from a more diverse dataset. We also observed that common text similarity metrics can provide a valuable first impression of the feedback's quality but are not sufficient to fully capture it. Thus, we would recommend including humans in the evaluation loop. A possible evaluation setup could ask annotators whether the generated feedback expresses the same meaning as the reference feedback included in the dataset. We believe annotators could also carry out this task with limited background in the provided domains. Nevertheless, we provide the detailed scoring rubrics utilized by our annotators along with the dataset to support future human evaluations. Finally, the baselines presented in this paper can be improved. Considering the deep understanding human graders require for this task, we believe neuro-symbolic approaches to be an exciting avenue of future research. Current models may especially benefit from incorporating knowledge bases and other reference material. The length of questions in the training set ranged from 12 to 20 tokens with reference answers between 48 and 84 tokens. The learners' answers were between 2 and 224 tokens long (av-erage=14.7, median=11) and the corresponding feedback ranged between 2 and 71 tokens Tables 6 and 7 contain example predictions generated by the T5 wo_quest models. The examples stem from the English unseen answers test split. While the examples are handpicked, we did not choose them based on the quality of the generated feedback. Instead, they were selected to be as brief as possible while predicting the partially correct class or a matching score. This is because feedback for partially correct answers tends to be the most interesting as correct and incorrect aspects of the response are discussed. Noteworthy is that many of the phrases used in the generated feedback are common in the training set. Take the feedback 4.1 for example, the sentence "The response correctly identifies the spanning-tree property that makes it appealing for broadcast and multicast" appears 20 times in the training split. The rest of the generated feedback is similar to the following feedback which occurs once in the training set: "The link-state modification for constructing spanning trees does not explain how each node shares its multicast information with others by adding it to the link state packet. Each node then has the complete information to build a multicast spanning tree". While the model made some modifications, most of the generated feedback seems to be a collection of var-ious phrases seen during training. The generated feedback for unseen questions would also often match questions in the training set. For example, the feedback would state that the response correctly provided four challenges, even though the unseen question asked for an advantage and drawback of a completely different concept. The extension headers are placed between fixed header and payload. The main advantage is that they are optional. 
This means that the use of space is more efficient because there won't be any redundant information placed there in order to keep the expected length. Score & Gold Feedback 0.25 feedback: The response answers only the location of extension headers correctly. The definition is missing and the advantage is incorrect when compared to IPv4 as the option field in IPv4 is also optional. Thus, there is no efficiency benefit. Predicted Score & Feedback 1.1 0.75 feedback: the advantage part in the response is partially correct because the advantage part is somewhat ambiguous. extension headers are optional but it's more a part of description/feature instead of an advantage. Predicted Label & Feedback 1.2 partially correct. the advantage is incorrect as the option field in the ipv4 header is optional and can be placed between payload and fixed header. the advantage is incorrect as the option field in the ipv4 header is optional and can be placed between payload and fixed header. Student Answer 2 1.IPv6 extension headers contain supplementary information used by network devices (such as routers, switches, and endpoint hosts) to decide how to direct or process an IPv6 packet. 2.They are located between fixed header and payload. 3. IPv6 extension headers can be of arbitrary length.And the number of options that a packet carries are not limited. Score & Gold Feedback 1.0 feedback: The response answers the description, location of extension headers, and advantage correctly. Predicted Score & Feedback 2.1 1.0 feedback: the response answers all three parts of the question correctly. Predicted Label & Feedback 2.2 partially correct. the advantage is partially correct as it does not state what extension headers are and how they are used in ipv4 and how they are used in ipv6. the advantage is incorrect as the option field in the ipv4 header is optional and can be of arbitrary length. | 1,070 | 696 | 1,070 |
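For readability, the examples above can be viewed as structured records with the fields used throughout the paper; the following is a small sketch in which the field names and class layout are assumptions chosen to mirror the running text:

from dataclasses import dataclass

def verification_label(score: float) -> str:
    # Automatic labeling described in the dataset section:
    # 0 -> Incorrect, 1 -> Correct, anything in between -> Partially Correct.
    if score == 0.0:
        return "Incorrect"
    if score == 1.0:
        return "Correct"
    return "Partially Correct"

@dataclass
class SAFExample:
    question: str
    reference_answer: str
    student_answer: str
    score: float             # in [0, 1], discretized per question (e.g. steps of 0.125)
    feedback: str            # response-contingent elaborated feedback

    @property
    def label(self) -> str:
        return verification_label(self.score)

ex = SAFExample(
    question="What are extension headers in IPv6 and where are they located in a packet?",
    reference_answer="Extension headers extend the fixed IPv6 header ...",
    student_answer="The extension headers are placed between fixed header and payload. ...",
    score=0.25,
    feedback="The response answers only the location of extension headers correctly. ...")
print(ex.label)   # -> "Partially Correct"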
Extracting Commonsense Properties from Embeddings with Limited Human Guidance | Intelligent systems require common sense, but automatically extracting this knowledge from text can be difficult. We propose and assess methods for extracting one type of commonsense knowledge, object-property comparisons, from pretrained embeddings. In experiments, we show that our approach exceeds the accuracy of previous work but requires substantially less hand-annotated knowledge. Further, we show that an active learning approach that synthesizes common-sense queries can boost accuracy. | Automatically extracting common sense from text is a long-standing challenge in natural language processing In this paper, we study methods for reducing the amount of human input needed to learn common sense. Specifically, we focus on learning relative comparisons of (one-dimensional) object properties, such as the fact that a cantaloupe is more round than a hammer. Methods for learning this kind of common sense have been developed previously (e.g. Our architecture for relative comparisons follows the zero-shot learning paradigm | We define the task of comparing object properties in two different ways: a three-way classification task, and a four-way classification task. In the three-way classification task, we want to estimate the following conditional probability: For example, P rob(An elephant is larger than a dog) can be expressed as P (L = > |O 1 = "elephant", O 2 = "dog", Property = "size"). The three-way classification task has been explored in previous work For each comparison property, we pick an adjective and its antonym to represent the { < , > } labels. For example, for the property size, we pick "big" and "small". The adjective "similar" serves as the label for ≈ for all properties. Under this framework, a relative comparison question, for instance, "Is a dog bigger than an elephant?", can be formulated as a quintuple query to the model, namely {dog, elephant, small, similar, big}. Denoting the word embeddings for tokens in a quintuple query as X, Y , R < , R ≈ , R > , our three-way model is defined as follows: for s ∈ {<, >, ≈}, where Q is an quintuple query, σ(•) is an activation function and W is a learnable weight matrix. The symbol ⊕ represents concatenation. We refer to this method as PCE (Property Comparison from Embeddings) for the 3-way task. We also experiment with generating label representations from just a single adjective (property We refer to this simpler method as PCE(one-pole). We note that in both the three-and four-way settings, the question "A>B?" is equivalent to "B<A?". We leverage this fact at test time by feeding our network a reversed object pair, and taking the average of the aligned network outputs before the softmax layer to reduce prediction variance. We refer to our model without this technique as PCE(no reverse). The key distinction of our method is that it learns a projection from the object word embedding space to the label embedding space. This allows the model to leverage the property label embeddings to perform zero-shot prediction on properties not observed in training. For example, from a training example "dogs are smaller than elephants", the model will learn a projection that puts "dogs" relatively closer to "small," and far from "big" and "similar." Doing so may also result in projecting "dog" to be closer to "light" than to "heavy," such that the model is able to predict "dogs are lighter than elephants" despite never being trained on any weight comparison examples. 
Our four-way model is the same as our three-way model, with an additional module to learn whether the comparison is applicable. Keeping the other output nodes unchanged, we add an additional component into the softmax layer to output the probability of "N/A": We propose a method to synthesize informative queries to pose to annotators, a form of active learning Good candidates for acquisition should have high uncertainty measure, but we also want to avoid querying outliers. As the vocabulary is finite, it is possible to evaluate the uncertainty measures for all possible inputs to synthesize the most uncertain query. However, such a greedy policy is expensive and prone to selecting outliers. Hence, we adopt a sampling based synthesis strategy: at each round, we generate one random object pair per property, and query the one that achieves the highest uncertainty measure. A classical difficulty faced by synthesis approaches to active learning is that they may pro-duce unnatural queries that are difficult for a human to label We now present our experimental results on both the three-way and four-way tasks. We test our three-way model on the VERB PHYSICS data set from We experiment with three types of embeddings: GloVe, normalized 300-dimensional embeddings trained on a corpus of 6B tokens For training PCE, we use an identity activation function and apply 50% dropout. We use the Adam optimizer with default settings to train the models for 800 epochs, minimizing cross entropy loss. For zero-shot learning, we adopt a hold-oneproperty-out scheme to test our models' zero-shot performance. Finally, for active learning, we use Word2vec embeddings. All the models are trained on 200 random training examples to warm up. We train for 20 epochs after each label acquisition. To smooth noise, we report the average of 20 different runs of random (passive learning) and least confident (LC) pool-based active learning In Table Table In Table As noted above, we found a "good" level of agreement (Cohen's Kappa of 0.64) for our PROPERTY COMMON SENSE data, which is lower than one might expect for task aimed at common sense. We analyzed the disagreements and found that they stem from two sources of subjectivity in the task. The first is that different labelers may have different thresholds for what counts as similar-a spider and an ant might be marked similar in size for one labeler, but not for another labeler. In our data, 58% of the disagreements are cases in which one annotator marks similar while the other says not similar. The second is that different labelers have different standards for whether a comparison is N/A. For example, in our data set, one labeler labels that a toaster is physically stronger than alcohol, and the other labeler says the comparison is N/A. 37% of our disagreements are due to this type of subjectivity. The above two types of subjectivity account for almost all disagreements (95%), and the remaining 5% are to annotation errors (one of the annotators makes mistake). Since we adopt an identity activation function and a single layer design, it is possible to simplify the mathematical expression of our model to make it more interpretable. After accounting for model averaging, we have the following equality: where W = W 1 ⊕ W 2 . So we can define a score of "R < " for a object with embedding X as the following: An object with a higher score for R < is more associated with the R < pole than the R > one. 
For example, score("elephant", "small") represents how small an elephant is; a larger score indicates a smaller object. Table PCE requires labels for the poles of the target object property. Table In this paper, we presented a method for extracting commonsense knowledge from embeddings. Our experiments demonstrate that the approach is effective at performing relative comparisons of object properties using less hand-annotated knowledge than in previous work. A synthesis active learner was found to boost accuracy, and further experiments with this approach are an item of future work. | 496 | 534 | 496
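A minimal sketch of the scoring view discussed above, assuming the model reduces, with an identity activation and a single layer, to a dot product between a linearly projected object pair and the pole label embeddings; the weight layout is an assumption consistent with the description, not the authors' released implementation:

import torch
import torch.nn as nn
import torch.nn.functional as F

class PCE(nn.Module):
    # Three-way property comparison from word embeddings: inputs are embeddings of
    # (object X, object Y) and of the three pole adjectives R_< ("small"),
    # R_~ ("similar"), and R_> ("big"). The paper additionally averages
    # predictions over the reversed object pair; that step is omitted here.
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(2 * dim, dim, bias=False)   # W: object pair -> label space

    def forward(self, x, y, r_less, r_sim, r_greater):
        h = self.proj(torch.cat([x, y], dim=-1))          # identity activation, as in the paper
        logits = torch.stack([h @ r_less, h @ r_sim, h @ r_greater], dim=-1)
        return F.softmax(logits, dim=-1)                  # P(<), P(~), P(>)

def pole_score(model, x, pole_embedding):
    # score(X, pole): association of object X with one pole, e.g.
    # score("elephant", "small"); a larger value means X sits closer to that pole.
    w1 = model.proj.weight[:, : model.proj.in_features // 2]   # the half of W acting on X
    return (w1 @ x) @ pole_embedding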
Kandinsky: an Improved Text-to-Image Synthesis with Image Prior and Latent Diffusion | Text-to-image generation is a significant domain in modern computer vision and has achieved substantial improvements through the evolution of generative architectures. Among these, there are diffusion-based models that have demonstrated essential quality enhancements. These models are generally split into two categories: pixel-level and latent-level approaches. We present Kandinsky 1 , a novel exploration of latent diffusion architecture, combining the principles of the image prior models with latent diffusion techniques. The image prior model is trained separately to map text embeddings to image embeddings of CLIP. Another distinct feature of the proposed model is the modified MoVQ implementation, which serves as the image autoencoder component. Overall, the designed model contains 3.3B parameters. We also deployed a user-friendly demo system that supports diverse generative modes such as text-to-image generation, image fusion, text and image fusion, image variations generation, and text-guided inpainting/outpainting. Additionally, we released the source code and checkpoints for the Kandinsky models. Experimental evaluations demonstrate a FID score of 8.03 on the COCO-30K dataset, marking our model as the top open-source performer in terms of measurable image generation quality. | In quite a short period of time, generative abilities of text-to-image models have improved substantially, providing users with photorealistic quality, near real-time inference speed, a great number of applications and features, including simple easyto-use web-based platforms and sophisticated AI graphics editors. This paper presents our unique investigation of latent diffusion architecture design, offering a fresh and innovative perspective on this dynamic field of study. First, we describe the new architecture of Kandinsky and its details. The demo system with implemented features of the model is also described. Second, we show the experiments, carried out in terms of image generation quality and come up with the highest FID score among existing open-source models. Additionally, we present the rigorous ablation study of prior setups that we conducted, enabling us to carefully analyze and evaluate various configurations to arrive at the most effective and refined model design. Our contributions are as follows: • We present the first text-to-image architecture designed using a combination of image prior and latent diffusion. • We demonstrate experimental results comparable to the state-of-the-art (SotA) models such as Stable Diffusion, IF, and DALL-E 2, in terms of FID metric and achieve the SotA score among all existing open source models. • We provide a software implementation of the proposed state-of-the-art method for textto-image generation, and release pre-trained models, which is unique among the topperforming methods. Apache 2.0 license makes it possible to use the model for both non-commercial and commercial purposes. • We create a web image editor application that can be used for interactive generation of images by text prompts (English and Russian languages are supported) on the basis of the proposed method, and provides inpainting/outpainting functionality. 
| Early text-to-image generative models, such as DALL-E This enabled a wide array of applications like 3D object synthesis Diffusion models achieve state-of-the-art results in image generation task both unconditional Text-to-image diffusion models have become a popular research direction due to the high performance of diffusion models and the ability to simply integrate text conditions with the classifierfree guidance algorithm We implemented a set of user-oriented solutions where Kandinsky model is embedded as a core imaging service. It has been done due to a variety of inference regimes, some of which need specific front-end features to perform properly. Overall, we implemented two main inference resources: Tele- FusionBrain represents a web-based image editor with such features as loading and saving images, sliding location window, erasing tools, zooming in/out, various styles selector, etc. (cf. Figure • text-to-image generation -user inputs a text prompt in Russian or English, then selects an aspect-ratio from the list (9:16, 2:3, 1:1, 16:9, 3:2), and the system generates an image; • inpainting -using the specific erasing tool, user can remove any arbitrary input image part and fill it, guided by a text prompt or without any guidance; • outpainting -input image can be extended with a sliding window that can be used as a mask for the following generation (if the window intersects any imaged area, then the empty window part is generated with or without text prompt guidance). Inpainting and outpainting options are the main image editing features of the model. Architectural details about these generation types can also be found in Figure Telegram bot contains the following image generation features (cf. Figure • text-to-image generation; • image and text fusion -user inputs an image and a text prompt to create a new image guided by this prompt; • image fusion -user inputs an image as the main one and another 'guiding' image, and the system generates their fusion; • image variations -user inputs an image, and the system generates several new images similar to the input one. In our work, we opted to deliver state-of-the-art text-to-image synthesis. In the initial stages of our research, we experimented with multilingual text encoders, such as mT5 We have set these encoders to be frozen during the training phase. The significant factor that influenced our design choice was the efficiency of training latent diffusion models, as compared to pixel-level diffusion models The construction of our model involves three primary steps: text encoding, embedding mapping (image prior), and latent diffusion. At the embedding mapping step, which we also refer to as the 20.00 GLIGEN 10 (Li et al., 2023) 21.04 Proprietary Technologies eDiff-I image prior, we use the transformer-encoder model. This model was trained from scratch with a diffusion process on text and image embeddings provided by the CLIP-ViT-L14 model. A noteworthy feature in our training process is the use of elementwise normalization of visual embeddings. This normalization is based on full-dataset statistics and leads to faster convergence of the diffusion process. We implemented inverse normalization to revert to the original CLIP-image embedding space in the inference stage. The image prior model is trained on text and image embeddings, provided by the CLIP models. 
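A small sketch of the element-wise normalization of visual embeddings mentioned above, with statistics computed over the full training set and the inverse applied at inference; the array shapes and variable names are assumptions:

import torch

# image_embeddings: (N, 768) CLIP-ViT-L/14 image embeddings for the training set (assumed shape)
def fit_stats(image_embeddings, eps=1e-6):
    mean = image_embeddings.mean(dim=0)
    std = image_embeddings.std(dim=0) + eps
    return mean, std

def normalize(e, mean, std):      # applied to the image-prior targets during training
    return (e - mean) / std

def denormalize(e, mean, std):    # inverse normalization before the latent diffusion stage
    return e * std + mean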
We conducted a series of experiments and ablation studies on the specific architecture design of the image prior model (Table The latent diffusion part employs a UNet model along with a custom pre-trained autoencoder. Our diffusion model uses a combination of multiple condition signals: CLIP-image embeddings, CLIPtext embeddings, and XLMR-CLIP text embeddings. CLIP-image and XLMR-CLIP embeddings are merged and utilized as an input to the latent diffusion process. Also, we conditioned the diffusion process on these embeddings by adding all of them to the time-embedding. Notably, we did not skip the quantization step of the autoencoder during diffusion inference as it leads to an increase in the diversity and the quality of generated images (cf. Figure We sought to evaluate and refine the performance of our proposed latent diffusion architecture in our experimental analysis. To this end, we employed automatic metrics, specifically FID-CLIP curves on the COCO-30K dataset, to obtain the optimal guidance-scale value and compare Kandinsky with competitors (cf. Figure To ensure a comprehensive evaluation, we also included an assessment of the IF model 12 , which is the closest open-source competitor to our proposed model. For this purpose, we computed FID scores for the IF model 13 (Table However, we acknowledged the limitations of automatic metrics that become obvious when it comes to capturing user experience nuances. Hence, in addition to the FID-CLIP curves, we conducted a blind human evaluation to obtain insightful feed-11 The combination of automatic metrics and human evaluation provides a comprehensive assessment of Kandinsky performance, enabling us to make informed decisions about the effectiveness and usability of our proposed image prior to design. Our experiments and evaluations have showcased the capabilities of Kandinsky architecture in text-toimage synthesis. Kandinsky achieved the FID score of 8.03 on the COCO-30K validation set at a resolution of 256×256, which puts it in close competition with the state-of-the-art models, and among the top performers within open-source systems. Our methodical ablation studies further dissected the performance of different configurations: quantization of latent codes in MoVQ slightly improves The best FID score is achieved using Linear Prior. This configuration stands out with the best FID score of 8.03. It is an intriguing outcome: the simplest linear mapping showcased the best FID, suggesting that there might exist a linear relationship between visual and textual embedding vector spaces. To further scrutinize this hypothesis, we trained a linear mapping on a subset of 500 cat images and termed it the "cat prior". Astonishingly, this mapping displayed high proficiency (cf. Figure We presented Kandinsky, a system for various image generation and processing tasks based on a novel latent diffusion model. Our model yielded the SotA results among open-sourced systems. Additionally, we provided an extensive ablation study of an image prior to design choices. Our system is equipped with free-to-use interfaces in the form of Web application and Telegram messenger bot. The pre-trained models are available on Hugging Face, and the source code is released under a permissive license enabling various, including commercial, applications of the developed technology. In future research, our goal is to investigate the potential of the latest image encoders. 
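The "Linear Prior" finding above (a single linear map from CLIP text embeddings to CLIP image embeddings giving the best FID) can be illustrated with a small sketch. The synthetic data, the 768-dimensional embeddings, and the MSE objective are assumptions made for demonstration; in practice the prior would be fit on precomputed pairs of CLIP embeddings for captioned images.

```python
import torch
import torch.nn as nn

# Toy stand-ins for paired CLIP text/image embeddings of captioned images.
text_embeds = torch.randn(2048, 768)
image_embeds = text_embeds @ torch.randn(768, 768) * 0.05 + 0.1 * torch.randn(2048, 768)

linear_prior = nn.Linear(768, 768)                # the entire "prior" is one linear map
optim = torch.optim.Adam(linear_prior.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                            # assumed objective for this sketch

for step in range(200):
    pred = linear_prior(text_embeds)
    loss = loss_fn(pred, image_embeds)
    optim.zero_grad()
    loss.backward()
    optim.step()

# At inference, a caption's text embedding is mapped to a predicted image
# embedding, which then conditions the latent diffusion decoder.
with torch.no_grad():
    predicted_image_embed = linear_prior(text_embeds[:1])
print(predicted_image_embed.shape)                # torch.Size([1, 768])
```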
We plan to explore the development of more efficient UNet architectures for text-to-image tasks and to focus on improving the understanding of textual prompts. Additionally, we aim to experiment with generating images at higher resolutions and to investigate new features extending the model: local image editing by a text prompt, attention reweighting, physics-based generation control, etc. Robustness against generating abusive content remains a crucial concern, warranting the exploration of real-time moderation layers or robust classifiers to mitigate undesirable, e.g., toxic or abusive, outputs. The current system produces images that appear natural; however, additional research can be conducted to (1) enhance the semantic coherence between the input text and the generated image, and (2) improve the absolute FID values and image quality as judged by human evaluations. We made multiple efforts to ensure that the generated images do not contain harmful, offensive, or abusive content, by (1) cleansing the training dataset of samples that were marked as harmful/offensive/abusive, and (2) detecting abusive textual prompts. While obvious queries, according to our tests, almost never generate abusive content, we cannot technically guarantee that carefully engineered prompts will never yield undesirable outputs. We therefore recommend adding an application-dependent layer of classifiers to filter out undesired content, and/or using image/representation transformation methods tailored to the given application. | 1,300 | 1,901 | 1,300
Adaptive Gating in Mixture-of-Experts based Language Models | Large language models, such as OpenAI's Chat-GPT, have demonstrated exceptional language understanding capabilities in various NLP tasks. Sparsely activated mixture-of-experts (MoE) has emerged as a promising solution for scaling models while maintaining a constant number of computational operations. Existing MoE model adopts a fixed gating network where each token is computed by the same number of experts. However, this approach contradicts our intuition that the tokens in each sequence vary in terms of their linguistic complexity and, consequently, require different computational costs. Little is discussed in prior research on the tradeoff between computation per token and model performance. This paper introduces adaptive gating in MoE, a flexible training strategy that allows tokens to be processed by a variable number of experts based on expert probability distribution. The proposed framework preserves sparsity while improving training efficiency. Additionally, curriculum learning is leveraged to further reduce training time. Extensive experiments on diverse NLP tasks show that adaptive gating reduces at most 22.5% training time while maintaining inference quality. Moreover, we conduct a comprehensive analysis of the routing decisions and present our insights when adaptive gating is used. | The field of natural language processing (NLP) has undergone a remarkable revolution driven by the rapid advancements in language models (Cha; Sparsely-activated mixture-of-experts (MoE) is a promising paradigm to address the scalability issue while maintaining a constant number of computation FLOPs Towards this end, we present our first attempt to empirically characterize and improve the efficiency of the gating mechanism in MoE. We observe that across various models and tasks, a large number of tokens display simple linguistic characteristics or a single dominant feature, which allows them to be effectively processed using just the top-1 expert. This observation suggests that the current top-2 gating strategy incurs unnecessary computation costs for a significant number of tokens. Motivated by this insight, we further introduce adaptive gating in MoE that enables tokens to be processed by a flexible number of experts depending on the gating decision. Our approach, in contrast to conventional MoE models, preserves the sparsity of MoE models while enhancing flexibility in token handling. We incorporate a threshold within the gating network to conduct adaptive token routing based on the distribution of expert probabilities. With adaptive gating, the majority of tokens use simple top-1 gating; top-2 gating is selectively applied only when necessary and beneficial, thus significantly reducing the computation cost. However, the training efficiency cannot achieve the same improvement as the computation cost due to the fact that tokens with top-2 gating always incur a longer training step, thus becoming the bottleneck. Therefore, to enhance training efficiency even further, we leverage the idea of curriculum learning by strategically adjusting the order of training data samples. We conduct extensive experiments on six NLP tasks with different encoder and decoder models. The results show that our approach can effectively reduce the end-to-end training time by at most 22.5%, while achieving comparable inference quality with top-2 gating MoE models. 
Moreover, we show that the tokens routed to two experts are coupled with the nature of each NLP task. For sentiment analysis, those are the tokens expressing neutral opinions; translation task pays attention to sentences with complex structure; Question and Answer connects key words in question and context and assign both with top-2 gating; summarization puts more effort in understanding the pronouns and finding tokens expressing central idea; top-2 routing decision changes along with the token to generated in text completion task and conversational tokens in dialogue response task use top-2 experts frequently. Empirically, we find that a small threshold value (i.e. 0.1, 0.2) in adaptive gating can lead to a similar performance as top-2 gating. Our contributions are as follows: • We propose adaptive gating in the MoE training scheme, which enables tokens to be processed by a flexible number of experts. • We leverage curriculum learning to alleviate the training bottleneck caused by varying execution times of tokens. • We conduct extensive experiments on various NLP tasks and datasets and present a thorough analysis of the gating decision of the tokens to prove the effectiveness and efficiency of adaptive gating. | 2.1 Mixture-of-Experts Mixture-of-Experts (MoE) has been adopted in various deep neural network models In particular, these models typically employ an MoE layer to substitute the feed-forward network (FFN) layer. The MoE layer comprises multiple FFNs, each acting as an expert, along with a gating network. Each expert i is a fully-connected twolayer network utilizing ReLU activation and with its own set of parameters. For a given token x, the output of an expert can be defined as: where W i 0 and W i 1 are the trainable weights of the two linear layers in expert i. The gating network takes in the embedding vector of each token x and multiplies them with its trainable matrix W G . The gate value for a specific token can be determined through: (2) This softmax activation R indicates the weight of each expert in processing the token. The gating network then dispatches this token to top-k experts with k highest activations. The final output of the MoE layer is: that is, the weighted sum of outputs from selected expert(s) We now discuss the design of adaptive gating in MoE for training. Observation. We first present our empirical findings from experiments with classical MoE models. Specifically, we extract the softmax activations and analyze the probability distribution of expert selection for each token in the gating network. Figures 1 depict the normalized activation values of four sampled tokens across 16 experts. We see that for tokens 1 and 4, their activations of the top-1 and top-2 expert are very close as shown in Figures Adaptive gating. Previous work has demonstrated that MoE experts specialize in different linguistic aspects. Building upon our empirical findings, one can see that many tokens can be effectively handled by a single expert during the training stage. To control the number of experts handling each token, we introduce a threshold parameter, denoted as T . If the activation value difference between the top-1 expert, denoted as i, and the top-2 expert, denoted as j, is within the threshold T , we consider the token as requiring both expert i and expert j for processing. Otherwise, we route the token only to the top-1 expert. Load balancing loss. Adaptive gating uses a flexible number of experts to process each token. 
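As a concrete illustration of the adaptive gating rule just described (route a token to both the top-1 and top-2 experts only when their gate probabilities differ by less than a threshold T), here is a small self-contained sketch. The function and variable names, the renormalization of the selected weights, and the per-token Python loop are simplifications and assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def adaptive_gate(token_embeds: torch.Tensor,
                  gate_weight: torch.Tensor,
                  threshold: float = 0.1):
    """Return, for each token, the chosen expert indices and their weights.
    Tokens whose top-1/top-2 gate probabilities are within `threshold`
    use two experts; all others use only the top-1 expert.
    (Sketch under assumptions; not the authors' released code.)"""
    probs = F.softmax(token_embeds @ gate_weight, dim=-1)    # (num_tokens, num_experts)
    top_vals, top_idx = probs.topk(2, dim=-1)
    use_two = (top_vals[:, 0] - top_vals[:, 1]) < threshold  # (num_tokens,)

    routes = []
    for t in range(token_embeds.size(0)):
        if use_two[t]:
            idx, w = top_idx[t], top_vals[t]
        else:
            idx, w = top_idx[t, :1], top_vals[t, :1]
        routes.append((idx, w / w.sum()))                    # renormalize chosen weights (assumption)
    return routes

# Example: 6 tokens, hidden size 16, 8 experts.
tokens = torch.randn(6, 16)
W_g = torch.randn(16, 8)
for i, (experts, weights) in enumerate(adaptive_gate(tokens, W_g, threshold=0.2)):
    print(f"token {i}: experts={experts.tolist()} weights={weights.tolist()}")
```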
This flexibility, however, adds extra difficulty to the load balancing problem in training which aims to evenly distribute tokens among all experts. As it is still important to prevent the gating network from overly concentrating on a very small number of experts, in adaptive gating, we impose the soft load balancing constraints on the top-1 gating decisions, while allowing top-2 gating decisions to be trained without any soft constraints. That is, the loss of each MoE layer i becomes: where f 1 e is the fraction of tokens dispatched to expert e among those processed by top-1 gating; p e is the average gating probability to expert e over all tokens in the current batch, and E i is the number of experts at layer i just as in classical MoE Challenge. While adaptive gating provides effective computational savings, Transformer MoE's model architecture poses a significant challenge to training efficiency. Specifically, there is a mismatch Adjust training data order. Our intuition is that the number of experts required by each token can be an indicator of the token complexity. We can therefore reorder the training data in a way that prioritizes simpler sequences during model training. Additionally, we can group together training data with similar complexity levels to minimize the bottleneck effect caused by difficult tokens in need of top-2 experts. To quantify the complexity of a training sample d, we define a complexity vector C: where L is the number of MoE layers in the model, and r i represents the ratio of tokens processed by top-2 experts to the sequence length (i.e., the total number of tokens in data sample d) in layer i. To determine the order of the training data, we identify the data sample with the fewest tokens processed by top-2 experts, and calculate the cosine similarity using complexity vectors of the remaining data samples. Training data is then reordered based on this similarity value, starting from the most similar ones. This approach allows the model to gradually learn from simpler sequences and progressively handle more complex sequences. We evaluate adaptive gating in MoE on six NLP tasks using various encoder and decoder models. We then analyze the gating decision to better understand the effectiveness of adaptive gating. Table We use the Transformer models from HuggingFace and convert the FFN layers to MoE layers We use 8 A100 GPUs, each with 40 GB memory. Data and expert parallel is used for distributed training. We distribute the experts evenly among all the GPUs. In terms of hyperparameters and model architecture, we adopt the default configurations established in the existing models We present the overall training and inference performance in Table While it is intuitive to understand that some minor tokens (e.g., "a", "the", "is") only need top-1 expert to process, this does not fully explain how and why adaptive gating works in different NLP tasks. Thus we analyze how the tokens are processed in training with adaptive gating, and make quite a few interesting observations that can help better answer this question. In a broader sense, we believe our insights are also instrumental towards building better language models. Note that when BPE tokenizer is used, we aggregate the result by mapping the tokens to the natural language word and perform analysis on the aggregated statistics. Sentiment analysis. Sentiment analysis exhibits the lowest percentage of top-2 gating among all tasks, and the percentage is stable across layers (Figure Summarization. 
In summarization, the percentage of tokens using two experts decreases in both encoder and decoder layers (Figure Text completion. Text completion differs from the previous tasks as it is a decoder-only and autoregressive task. The gating results in text completion are influenced by the current prediction being generated. The focus of tokens changes dynamically based on the current prediction. It is challenging to identify specific types of tokens that consistently receive two experts. When predicting a pronoun, for example, the focus shifts to the names of individuals. Similar patterns can be observed for numbers and dates. Additionally, we find that the percentage of tokens routed to two experts is linked to the length of the current sequence: longer sequences have a higher percentage of top-2 gating. Dialogue response. Dialogue response, compared to text completion, requires more understanding of the narrative input and the dialogue history. We find that much of the effort is put into processing the dialogue history. First, one key distinction is that tokens with a conversational meaning occur much more frequently. These words lack informative content but serve to express human-like sentiments, such as gratitude and politeness. We infer that routing these tokens to two experts indicates that conversational usage differs from written text, and that it is also critical to learn where and when these words should be used. Second, given the nature of dialogue, many conversations are based on underlying assumptions and conditions. Related tokens are usually processed with two experts to improve the understanding of the context. For instance, the dialogue example provided in Table 4 is built on top of a scenario assuming that "Johnathan tells his parents that he is gay" and asks the model to answer questions under this condition. Threshold T in adaptive gating. We now conduct an ablation study on the threshold T introduced in adaptive gating. Increasing the threshold value results in a less sparse model, where more tokens are assigned to the top-2 gating mechanism, subsequently increasing the computational FLOPs. Table Choice of k. Adaptive gating in MoE is currently limited to top-k gating, where k can be either 1 or 2. This is built on the common practice in extensive prior work, which shows that top-2 gating yields promising results in MoE. Further evaluation is necessary to validate the performance of a wider range of k values. Our experiments were conducted on a diverse set of NLP tasks and datasets, but it is essential to note that the effectiveness and efficiency of adaptive MoE may vary depending on the specific task characteristics. Different tasks may exhibit distinct patterns and complexities, which can impact the performance and generalizability of the proposed approach. Further investigation and evaluation on a wider range of tasks would provide a more comprehensive understanding of the limitations and applicability of adaptive MoE. This paper demonstrates the effectiveness and flexibility of adaptive gating in MoE models for a wide range of natural language processing tasks. By dynamically adjusting the number of experts based on token characteristics, we achieve improved training efficiency without compromising inference performance. Additionally, the integration of curriculum learning allows us to tackle the challenge of varying execution times, thereby reducing training costs.
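The curriculum-style data reordering described earlier (build a per-sample complexity vector from the fraction of top-2-routed tokens per MoE layer, then order samples by cosine similarity to the simplest one) could look roughly like the sketch below; the anchor handling and tie-breaking are assumptions rather than the paper's exact procedure.

```python
import numpy as np

def reorder_by_complexity(complexity: np.ndarray) -> np.ndarray:
    """complexity: (num_samples, num_moe_layers) array, where entry [d, i] is the
    fraction of tokens in sample d routed to two experts at MoE layer i.
    Returns indices ordering samples from 'easiest' to 'hardest'.
    (Sketch of the described reordering; details are assumptions.)"""
    # Anchor: the sample with the smallest overall share of top-2 tokens.
    anchor = complexity.sum(axis=1).argmin()
    a = complexity[anchor]

    # Cosine similarity of every sample's complexity vector to the anchor's.
    norms = np.linalg.norm(complexity, axis=1) * (np.linalg.norm(a) + 1e-8) + 1e-8
    sims = complexity @ a / norms

    # Most similar (i.e. similarly simple) samples come first.
    order = np.argsort(-sims)
    order = np.concatenate(([anchor], order[order != anchor]))
    return order

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    C = rng.uniform(0.0, 0.5, size=(10, 4))   # 10 samples, 4 MoE layers
    print(reorder_by_complexity(C))
```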
Our research sheds light on the trade-off between training efficiency and model performance in sparse and dynamic MoE networks, offering valuable insights for the development of more scalable and adaptable language models. | 1,313 | 3,308 | 1,313 |
Restricted Recurrent Neural Tensor Networks: Exploiting Word Frequency and Compositionality | Increasing the capacity of recurrent neural networks (RNN) usually involves augmenting the size of the hidden layer, with significant increase of computational cost. Recurrent neural tensor networks (RNTN) increase capacity using distinct hidden layer weights for each word, but with greater costs in memory usage. In this paper, we introduce restricted recurrent neural tensor networks (r-RNTN) which reserve distinct hidden layer weights for frequent vocabulary words while sharing a single set of weights for infrequent words. Perplexity evaluations show that for fixed hidden layer sizes, r-RNTNs improve language model performance over RNNs using only a small fraction of the parameters of unrestricted RNTNs. These results hold for r-RNTNs using Gated Recurrent Units and Long Short-Term Memory. | Recurrent neural networks (RNN), which compute their next output conditioned on a previously stored hidden state, are a natural solution to sequence modeling. In this paper, we propose the Restricted RNTN (r-RNTN) which uses only K < |V | recurrence matrices. Given that |V | words must be assigned K matrices, we map the most frequent K -1 words to the first K -1 matrices, and share the K-th matrix among the remaining words. This mapping is driven by the statistical intuition that frequent words are more likely to appear in diverse contexts and so require richer modeling, and by the greater presence of predicates and function words among the most frequent words in standard corpora like COCA | We focus on related work that addresses language modeling via RNNs, word representation, and conditional computation. Given a sequence of words (x 1 , ..., x T ), a language model gives the probability P (x t |x 1...t-1 ) for t ∈ [1, T ]. Using a RNN, where where i(z) maps a hot-one encoded vector to its integer representation. Thus the U h tensor is composed of |V | recurrence matrices, and at each step of sequence processing the matrix corresponding to the current input is used to transform the hidden state. The authors also proposed m-RNN, a factorization of the to reduce the number of parameters, where v xt is a factor vector of the current input x t , but like the RNTN, memory still grows linearly with |V |. The RNTN has the property that input symbols have both a vector representation given by the embedding and a matrix representation given by the recurrence matrix, unlike the s-RNN where symbols are limited to vector representations. The integration of both vector and matrix representations has been discussed but with a focus on representation learning and not sequence modeling Irsoy and Cardie (2014) used m-RNNs for the task of sentiment classification and obtained equal or better performance than s-RNNs. Methods that use conditional computation Whereas our work is concerned with updating the network's hidden state, To balance expressiveness and computational cost, we propose restricting the size of the recurrence tensor in the RNTN such that memory does not grow linearly with vocabulary size, while still keeping dedicated matrix representations for a subset of words in the vocabulary. We call these Restricted Recurrent Neural Tensor Networks (r-RNTN), which modify eq. ( where U h is a tensor of K < |V | matrices of size H × H, b h is a H × K bias matrix with columns indexed by f . The function f (w) maps each vocabulary word to an integer between 1 and K. 
We use the following definition for f : where rank(w) is the rank of word w when the vocabulary is sorted by decreasing order of unigram frequency. This is an intuitive choice because words which appear more often in the corpus tend to have more variable contexts, so it makes sense to dedicate a large part of model capacity to them. A second argument is that frequent words tend to be predicates and function words. We can imagine that predicates and function words transform the meaning of the current hidden state of the RNN through matrix multiplication, whereas nouns, for example, add meaning through vector addition, following We also perform initial experiments with r-RNTNs using LSTM and GRUs. A GRU is described by and an LSTM by We create r-RNTN GRUs (r-GRU) by making U h h and b h h input-specific (as done in eq. ( We evaluate s-RNNs, RNTNs, and r-RNTNs by training and measuring model perplexity (PPL) on the Penn Treebank (PTB) corpus For an r-RNTN with H = 100, we vary the tensor size K from 1, which corresponds to the s-RNN, all the way up to 10000, which corresponds to the unrestricted RNTN. As a simple way to evaluate our choice of rank-based mapping function f , we compare it to a pseudo-random variant: We also compare results to 1) an s-RNN with H = 150, which has the same number of parameters as an r-RNTN with H = 100 and K = 100. 2) An m-RNN with H = 100 with the size of factor vectors set to 100 to match this same number of parameters. 3) An additional r-RNTN with H = 150 is trained to show that performance scales with H as well. We split each sentence into 20 word subsequences and run stochastic gradient descent via backpropagation through time for 20 steps without mini-batching, only reseting the RNN's hidden state between sentences. Initial learning rate (LR) is 0.1 and halved when the ratio of the validation perplexity between successive epochs is less than 1.003, stopping training when validation improvement drops below this ratio for 5 consecutive epochs. We use Dropout Finally, we train GRUs, LSTMs, and their r-RNTN variants using the PTB corpus and parameters similar to those used by Results are shown in fig. Comparing the r-RNTN to the baseline s-RNN with H = 100 (fig. It is remarkable that even with K as small as 100, the r-RNTN approaches the performance of the RNTN with a small fraction of the parameters. This reinforces our hypothesis that complex transformation modeling afforded by distinct matrices is needed for frequent words, but not so much for infrequent words which can be well represented by a shared matrix and a distinct vector embedding. As shown in table 1, with an equal number of parameters, the r-RNTN with f mapping outperforms the s-RNN with a bigger hidden layer. It appears that heuristically allocating increased model capacity as done by the f based r-RNTN is a better way to increase performance than sim-ply increasing hidden layer size, which also incurs a computational penalty. Although m-RNNs have been successfully employed in character-level language models with small vocabularies, they are seldom used in wordlevel models. The poor results shown in table 1 could explain why. In this paper, we proposed restricted recurrent neural tensor networks, a model that restricts the size of recurrent neural tensor networks by mapping frequent words to distinct matrices and infrequent words to shared matrices. 
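A minimal sketch of the r-RNTN idea discussed above: the frequency-rank mapping f assigns each of the K-1 most frequent words its own recurrence matrix, and all remaining words share the K-th matrix. The module below is illustrative only; the activation function, initialization, and exact recurrence follow common s-RNN conventions and are assumptions rather than the paper's exact implementation.

```python
import torch
import torch.nn as nn

def rank_to_matrix_index(rank: int, K: int) -> int:
    """Map a word's frequency rank (1 = most frequent) to one of K recurrence
    matrices: the K-1 most frequent words get dedicated matrices, and all
    remaining words share the K-th one."""
    return rank if rank < K else K

class RestrictedRecurrence(nn.Module):
    """Sketch of the r-RNTN recurrence h_t = sigma(W x_t + U[f(w_t)] h_{t-1} + b),
    with only K hidden-to-hidden matrices instead of |V|. Illustrative only."""
    def __init__(self, vocab_ranks, K: int, emb_dim: int, hidden: int):
        super().__init__()
        self.register_buffer(
            "word2mat",
            torch.tensor([rank_to_matrix_index(r, K) - 1 for r in vocab_ranks]))
        self.W = nn.Linear(emb_dim, hidden)
        self.U = nn.Parameter(torch.randn(K, hidden, hidden) * 0.05)
        self.b = nn.Parameter(torch.zeros(K, hidden))

    def forward(self, word_id: torch.Tensor, x_emb: torch.Tensor, h_prev: torch.Tensor):
        m = self.word2mat[word_id]                         # which recurrence matrix to use
        rec = torch.einsum("bh,bhk->bk", h_prev, self.U[m])
        return torch.sigmoid(self.W(x_emb) + rec + self.b[m])

# Example: vocabulary of 10k words, K = 100 distinct matrices.
ranks = list(range(1, 10_001))                             # rank 1 = most frequent word
cell = RestrictedRecurrence(ranks, K=100, emb_dim=32, hidden=64)
h = cell(torch.tensor([0, 9_999]), torch.randn(2, 32), torch.zeros(2, 64))
print(h.shape)  # torch.Size([2, 64])
```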
r-RNTNs were motivated by the need to increase RNN model capacity without increasing computational costs, while also following the idea that some words are better modeled by matrices than by vectors. Interestingly, results for s-RNNs and r-GRUs suggest that, given the same number of parameters, it is possible to obtain higher performance by increasing K and reducing H. This is not the case with r-LSTMs, perhaps due to our choice of which of the recurrence matrices to make input-specific. We will further investigate both of these phenomena in future work, experimenting with different combinations of word-specific matrices for r-GRUs and r-LSTMs (rather than only U h h and U c h ), and combining our method with recent improvements to gated networks in language modeling. | 801 | 698 | 801
Knowledge Graph-Augmented Abstractive Summarization with Semantic-Driven Cloze Reward | Sequence-to-sequence models for abstractive summarization have been studied extensively, yet the generated summaries commonly suffer from fabricated content, and are often found to be near-extractive. We argue that, to address these issues, the summarizer should acquire semantic interpretation over input, e.g., via structured representation, to allow the generation of more informative summaries. In this paper, we present ASGARD, a novel framework for Abstractive Summarization with Graph-Augmentation and semantic-driven RewarD. We propose the use of dual encoders-a sequential document encoder and a graphstructured encoder-to maintain the global context and local characteristics of entities, complementing each other. We further design a reward based on a multiple choice cloze test to drive the model to better capture entity interactions. Results show that our models produce significantly higher ROUGE scores than a variant without knowledge graph as input on both New York Times and CNN/Daily Mail datasets. We also obtain better or comparable performance compared to systems that are finetuned from large pretrained language models. Human judges further rate our model outputs as more informative and containing fewer unfaithful errors. | Abstractive summarization aims to produce concise and informative summaries with the goal of promoting efficient information consumption and knowledge acquisition | The Week column. Mayor John Fabrizi of Brigeport, Conn, publicly admits he used cocaine and abused alcohol while in office; says he stopped drinking alcohol and sought help for his drug problem about 18 months ago. learning objectives, these models frequently produce unfaithful content To this end, we present ASGARD, a framework for Abstractive Summarization with Graph-Augmentation and semantic-driven RewarD. 1 Under the encoder-decoder framework, we enhance the regular document encoder with a separate graph-structured encoder to maintain the global context and local characteristics of entities by using the outputs from an open information extraction (OpenIE) system. Specifically, we experiment with two graph variants, one mainly capturing entities' document-level interactions and the other reflecting such interactions within each paragraph plus topic shifts across paragraphs. Both graphs can capture interactions among entities that are positioned far from one another in the document and significantly reduce redundancy, as shown in Fig. Moreover, we propose a novel multi-choice cloze reward to drive the model to acquire semantic understanding over the input. Concretely, we design cloze questions by removing pairwise entities that are connected with a predicate or co-occur in a human summary sentence, whereas prior work only considers single entities to construct questions We carry out automatic and human evaluations on popular summarization datasets. Models based on ASGARD yield significantly better ROUGE scores The rest of the paper is organized as follows. We describe related work in the next section ( § 2). We then discuss the knowledge graph construction in § 3 and formulate our graph-augmented summarization framework in § 4. In § 5, we introduce reinforcement learning with cloze reward. Experiments and results are presented in § 6 and § 7. Finally, we conclude in § 8. Graph-Augmented Summarization and Generation. 
Graph structures have long been used for extractive summarization, such as in Textrank Since question answering (QA) has been used for summary evaluation To construct a knowledge graph from an input document, we utilize Stanford CoreNLP In this section, we describe our graph-augmented abstractive summarization framework, as displayed in Fig. Document Encoder. We first feed input x to RoBERTa Graph Encoder. Built on the graph constructed in § 3, we create nodes for predicates as done in previous graph-to-sequence work Node Initialization. Each node often contains multiple mentions of an entity; we thus initialize node representation v i by using the average embedding of its tokens. We leverage document encoder hidden states h k as the contextual representation of tokens. Number of mentions in the node is added as an extra encoding to v i , to signify entity salience. Contextualized Node Encoding. Our graph encoder improves upon Graph Attention Networks (GATs) α n i,j W 0,n v j (1) where N n=1 denotes the concatenation of N heads, each producing a vector of the same dimension as v i . We use N = 4 in our experiments with two layers of GATs. N (v i ) denotes the neighbors of v i in graph G. W * are trainable parameters. The graph encoder described above encodes document-level global context by merging entity mentions throughout the document and capturing their interactions with graph paths. It is henceforth denoted as DOCGRAGH. Encoder Extension to Capture Topic Shift (SEGGRAGH). Modeling topic transitions and recurrences enables the identification of notable content, thus benefiting summarization Our summary decoder uses a single-layer unidirectional LSTM with a hidden state s t at step t; it generates summary tokens recurrently by jointly attending to the input document and the graph. Attending the Graph. At each decoding step t, we compute a graph context vector c v t with the attention mechanism where u * are also trainable parameters. We omit bias terms for simplicity. Attending the Document. Similarly, the document context c t is computed over input tokens by additionally considering the graph context c v t : Token Prediction. Graph and document context vectors, treated as salient content summarized from both sources, are concatenated with the decoder hidden state s t to produce the vocabulary distribution P vocab : We use weight-sharing between the input embedding matrix and the matrix W out to allow reusing linguistic knowledge as proposed by where y t-1 denotes the embedding for the token predicted at step t -1. Modified Hierarchical Attention for SegGraph. As mentioned in § 4.1, SegGraph captures content salience by modeling topic shift across paragraphs. We thus seek to leverage paragraph-level importance to redistribute the node attentions, e.g., giving more attentions to nodes in important paragraphs. In particular, we utilize hierarchical attention We first consider a maximum likelihood (ML) training objective that minimizes the following loss: where x are documents and y are references from the training set D, and θ are model parameters. Node Salience Labeling. In addition to modeling local characteristics of nodes, we further enhance the model by adding an objective to label node salience, e.g., whether the entities in a node are mentioned in the reference summaries. We introduce a soft mask layer over each node before it is passed into the graph encoder, to signify its salience. 
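To ground the contextualized node encoding described above, here is a small multi-head graph-attention layer in the spirit of GAT, where each node representation is updated as a concatenation over heads of attention-weighted sums of neighbor features. The attention parameterization, the dense adjacency matrix, and the omission of the paper's additional details (stacking two layers, the salience mask, positional/mention encodings) are simplifying assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttentionLayer(nn.Module):
    """Single multi-head graph-attention layer (GAT-style) as a sketch of the
    contextualized node encoding; not the paper's exact encoder."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        assert dim % heads == 0
        self.heads, self.d_head = heads, dim // heads
        self.W = nn.Linear(dim, dim, bias=False)      # per-head projections, packed
        self.attn = nn.Parameter(torch.randn(heads, 2 * self.d_head) * 0.1)

    def forward(self, v: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # v: (num_nodes, dim); adj: (num_nodes, num_nodes) with 1 for edges.
        n = v.size(0)
        h = self.W(v).view(n, self.heads, self.d_head)
        hi = h.unsqueeze(1).expand(n, n, self.heads, self.d_head)   # target node i
        hj = h.unsqueeze(0).expand(n, n, self.heads, self.d_head)   # neighbor node j
        e = F.leaky_relu((torch.cat([hi, hj], dim=-1) * self.attn).sum(-1))  # (n, n, heads)
        e = e.masked_fill(adj.unsqueeze(-1) == 0, float("-inf"))
        alpha = torch.softmax(e, dim=1)                             # attend over neighbors j
        out = torch.einsum("ijh,jhd->ihd", alpha, h)                # weighted neighbor sum
        return out.reshape(n, -1)                                   # concatenate heads

# Toy graph: 5 nodes (entities/predicates), a few edges plus self-loops.
adj = torch.eye(5)
adj[0, 1] = adj[1, 0] = adj[1, 2] = adj[2, 1] = adj[3, 4] = adj[4, 3] = 1.0
nodes = torch.randn(5, 64)
layer = GraphAttentionLayer(dim=64, heads=4)
print(layer(nodes, adj).shape)  # torch.Size([5, 64])
```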
This layer, serving as an information gate, predicts a real number m i in [0, 1] for each node v i and multiplies to itself, i.e. m i v i . For node v i , the mask is calculated as mi = sigmoid(u 2 v i ). During training, the gold-standard mask m i for a node is set to 1 if it contains at least one content word in the reference summary; otherwise, 0. We add the following objective for all nodes in the dataset D: where N v represents the number of nodes in the dataset. Finally, the ML training objective takes the following form: L ml = L mask + L seq . After maximum likelihood training with L ml , we further design a multiple choice cloze reward in a second-stage reinforcement learning (RL), leading the model to generate more faithful and informative summaries. For RL, we use a self-critical policy gradient algorithm Our reward function uses the combination of ROUGE and the multiple choice cloze score introduced below, i.e., R(y) = R rouge (y) + γ cloze R cloze . For ROUGE, it considers F1 scores of ROUGE-1, ROUGE-2, and ROUGE-L calculated against the reference summary, and takes the form of R rouge (y Multiple Choice Cloze Reward. Here, we present a novel multiple choice cloze reward to work with our knowledge graph and guide the summarization model towards improved awareness of entity interactions. We treat the systemgenerated summary as context. We provide a set of questions automatically constructed from the corresponding reference summary written by a human. We separately train a question answering (QA) model to address the questions by reading the context. Intuitively, if the system summary shares salient information with the reference, the QA model will assign the correct answers with high probability. We decide to use the average probability of the correct answers as our cloze reward. Below, we give details on how to construct the questions and candidate answers with examples shown in Fig. We first select OpenIE triples from the salient context and filter out those that have any overlapping content word with the correct answer. For argument pair questions, we create one candidate answer by swapping the subject and the object (e.g. candidate B as in Fig. In case reference summaries do not yield Ope-nIE triples, we create additional entity pair questions. We remove two co-occurring entities from the summary and create three candidate answers in the same way as described above. QA Model. We fine-tune RoBERTa Datasets. We experiment with two popular summarization datasets with summaries containing multiple sentences: the New York Times annotated corpus (NYT) For NYT, we add results by SENECA model On CNN/Daily Mail, we include comparisons of a two-stage fine-tuned model (first on an extractor, then on an abstractor) with BERT (Liu and Lapata, 2019) (BERTSUMEXTABS), and a unified pretrained language model for generation In addition to ASGARD-DOC and ASGARD-SEG, which are trained with an ML objective, we report results trained with ROUGE as the reward (R rouge ), and with an additional cloze reward (R cloze ). Lastly, we consider a variant NOGRAPH by ablating the graph encoder. Results on NYT. As displayed in Table Results on CNN/DM. We observe similar trends on the CNN/DM articles as shown in ticeably, ASGARD-DOC trained with the combined ROUGE and cloze reward produces better ROUGE scores than BERTSUMEXTABS and UNILM, which are carefully fine-tuned from large pretrained language models, and the numbers are also comparable to the fine-tuned BART. Evaluation with Cloze Test. 
We further evaluate model-generated summaries with our proposed cloze test. Here, we report two scores in Fig. We further conduct human evaluation to analyze the informativeness and fluency of the generated summaries, as well as to investigate the unfaithful errors made by different models. We sample 100 articles from the NYT test set and hire three native or fluent speakers of English to rate summaries generated by our two systems, NOGRAPH+R rouge and ASGARD-SEG+R rouge + R cloze , along with outputs by BART and human-written summaries (presented in random order). After reading the articles, each judge scores summaries on a Likert scale from 1 (worst) to 5 (best) on informativeness-whether the summary covers important information from the input, and fluency-whether the summary is grammatically correct. We consider three types of unfaithful errors: (i) hallucination error-creating content not present in the input, (ii) out-of-context error-generating facts without including required context or within incorrect context, and (iii) deletion or substitution error-mistakenly deleting or substituting subjects, objects, or clauses. We ask the annotators to label each type as 1 for existence of errors, and 0 otherwise. Detailed guidelines are in the Appendices. From Table For unfaithful errors, we report the percentage of errors calculated by majority voting (i.e., more than one annotator vote as incorrect). First, we find that our ASGARD-SEG model has a comparable error pattern as human summaries. Specifically, for out-of-context and deletion or substitution errors, our graph-enhanced model produces significantly fewer mistakes in these categories, compared to the model without graph information. This implies that knowledge graph-enhanced models can improve summary faithfulness. Interestingly, human-written summaries are also discerned to contain a nontrivial amount of hallucination errors. After inspection, we find that human tends to leverage world knowledge to include content that is not covered by the articles. For instance, for an article discussing events in "Boston", the human writer may describe them as happening in "Massachusetts" in the summary. We further plot the distributions of automatic evaluation scores regarding the three types of unfaithful errors based on majority voting in Fig. Nevertheless, with regard to hallucination errors, we do not see such pattern; there is even a slightly reversed relation with both cloze scores and ROUGE scores, wherein summaries with more hallucination errors tend to score higher. This echos our previous observation that human summaries can be hallucinatory too, where world knowledge is used for writing the summaries. We presented a novel knowledge graph-augmented abstractive summarization framework, along with a novel multiple choice cloze reward for reinforcement learning. Our models capture both local characteristics and global interactions of entities from the input, thus generating summaries of higher quality. In tandem with the graph representation, our cloze reward further improves summary content. Human evaluation further confirms that our graphaugmented models trained with the cloze reward produce more informative summaries and significantly reduces unfaithful errors. Training Details. We utilize Adam (Kingma and Ba, 2015) with a gradient clipping of 2.0 and a batch size of 32 for all models. 
During ML training, a learning rate of 0.001 is used; during RL stage, it is reduced to 0.0001 We use the base version of BERT model In our human evaluation, each human annotator is presented with 100 news articles. The annotators are asked to evaluate four summaries (in random order) for each article on two aspects (informativeness and fluency) on a scale of 1 to 5 (1 being very poor and 5 being very good). Furthermore, for unfaithfulness, we define three types of unfaithful errors and ask annotators to label whether summaries contain any type of error. Instructions in Table Here are descriptions of the aspects: • Informativeness: Whether the summary provides enough and necessary content coverage from the input article. • Fluency: Whether the summary is free of obvious grammatically incorrect sentences (e.g., fragments, missing components) that make the text difficult to read. • Faithfulness: Whether the summary accords with the facts expressed in the source. | 1,248 | 162 | 1,248 |
Unsupervised Dual Paraphrasing for Two-stage Semantic Parsing | One daunting problem for semantic parsing is the scarcity of annotation. Aiming to reduce nontrivial human labor, we propose a two-stage semantic parsing framework, where the first stage utilizes an unsupervised paraphrase model to convert an unlabeled natural language utterance into the canonical utterance. The downstream naive semantic parser accepts the intermediate output and returns the target logical form. Furthermore, the entire training process is split into two phases: pre-training and cycle learning. Three tailored self-supervised tasks are introduced throughout training to activate the unsupervised paraphrase model. Experimental results on benchmarks OVERNIGHT and GE-OGRANNO demonstrate that our framework is effective and compatible with supervised training. | Semantic parsing is the task of converting natural language utterances into structured meaning representations, typically logical forms b). Researchers use crowdsourcing to paraphrase those canonical utterances into natural language utterances (the upper part of Figure Annotators may struggle to understand the exact meanings of canonical utterances. Some canonical utterances even incur ambiguity, which enhances the difficulty of annotation. Furthermore, In this work, inspired by unsupervised neural machine translation Paraphrasing aims to perform semantic normalization and reduce the diversity of expression, trying to bridge the gap between natural language and logical forms. The naive neural semantic parser learns inner mappings between canonical utterances and logical forms, as well as the structural constraints. The unsupervised paraphrase model consists of one shared encoder and two separate decoders for natural language and canonical utterances. In the pre-training phase, we design three types of noise (Section 3.1) tailored for sentence-level denoising autoencoder We conduct extensive experiments on benchmarks OVERNIGHT and GEOGRANNO, both in unsupervised and semi-supervised settings. The results show that our method obtains significant improvements over various baselines in unsupervised settings. With full labeled data, we achieve new state-of-the-art performances (80.1% on OVERNIGHT and 74.5% on GEOGRANNO), not considering additional data sources. The main contributions of this work can be summarized as follows: • A two-stage semantic parser framework is proposed, which casts parsing into paraphrasing. No supervision is provided in the first stage between input natural language utterances and intermediate output canonical utterances. • In unsupervised settings, experimental results on datasets OVERNIGHT and GEOGRANNO demonstrate the superiority of our model over various baselines, including the supervised method in • The framework is also compatible with traditional supervised training and achieves the new state-of-the-art performances on datasets OVERNIGHT (80.1%) and GEOGRANNO (74.5%) with full labeled data. 2 Our Approach | For the rest of our discussion, we use x to denote natural language utterance, z for canonical utterance, and y for logical form. X , Z and Y represent the set of all possible natural language utterances, canonical utterances, and logical forms respectively. The underlying mapping function f : Z -→ Y is dominated by grammar rules. 
We can train a naive neural semantic parser P nsp using attention As for the paraphrase model (see Figure Given an input utterance x ∈ X , the paraphrase model D z • E converts it into possible canonical utterance ẑ = D z • E(x); then ẑ is passed into the pre-trained naive parser P nsp to obtain predicted logical form ŷ = P nsp • D z • E(x). Another paraphrase model, D x • E, is only used as an auxiliary tool during training. To train an unsupervised paraphrase model with no parallel data between X and Z, we split the entire training procedure into two phases: pre-training and cycle learning. D x • E and D z • E are first pretrained as denoising auto-encoders (DAE). This initialization phase plays a significant part in accelerating convergence due to the ill-posed nature of paraphrasing tasks. Next, in the cycle learning phase, we employ both back-translation (BT) and dual reinforcement learning (DRL) strategies for self-training and exploration. In this phase, we initialize the paraphrase model via the denoising auto-encoder task. All auxiliary models involved in calculating rewards (see Section 3.2) are also pre-trained. where Θ Dx•E and Θ Dz•E are parameters for the system. The training framework till now is just a noisycopying model. To improve upon it, we adopt two schemes in the cycle learning phase, backtranslation (BT) and dual reinforcement learning (DRL), see Figure Back-translation In this task, the shared encoder E aims to map the input utterance of different types into the same latent space, and the decoders need to decompose this representation into the utterance of another type. More concretely, given a natural language utterance x, we use paraphrase model D z • E in evaluation mode with greedy decoding to convert x into canonical utterance ẑ. We will obtain pseudo training sample (ẑ, x) for paraphrase model D x • E. Similarly, (x, z) pair can be synthesized from model D x • E given canonical utterance z. Next, we train the paraphrase model from these pseudo-parallel samples and update parameters by minimizing The updated model will generate better paraphrases during the iterative process. Dual reinforcement learning Back-translation pays more attention to utilize what has been learned by the dual model, which may lead to a local optimum. To encourage more trials during cycle learning, we introduce the dual reinforcement learning strategy and optimize the system through policy gradient Starting from a natural language utterance x, we sample one canonical utterance z through D z • E. Then, we evaluate the quality of z from different aspects (see Section 3.2) and obtain reward R x (z). Similarly, we calculate reward R z (x) for sampled natural language utterance x. To cope with high variance in reward signals, we increase sample size to K and re-define reward signals via a baseline b(•) to stabilize learning: (take zk for an example) We investigate different baseline choices (such as running mean, cumulative mean of history, and reward of the greedy decoding prediction), and it performs best when we use the average of rewards within samples of per input, especially with larger sample size. The training objective is the negative sum of expected reward: The gradient is calculated with REIN-FORCE (Williams, 1992) algorithm: The complete loss function in the cycle learning phase is the sum of cross entropy loss and policy gradient loss: L Cycle = L BT + L DRL . The entire training procedure is summarized in Algorithm 1. 
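The dual reinforcement learning step above combines sampled rewards with a per-input baseline (the mean reward over the K samples) and a REINFORCE-style gradient. A minimal sketch of that loss, given sample log-probabilities and rewards from the paraphrase models, might look as follows; the toy tensors stand in for real seq2seq outputs.

```python
import torch

def drl_loss(sample_logprobs: torch.Tensor, sample_rewards: torch.Tensor) -> torch.Tensor:
    """REINFORCE-style dual reinforcement learning loss with the per-input
    average reward as baseline. Both inputs have shape (batch, K) for K
    sampled paraphrases per input. (Sketch only; the actual system plugs in
    seq2seq sample log-probabilities and the rewards described in the paper.)"""
    baseline = sample_rewards.mean(dim=1, keepdim=True)   # b(.) = mean over the K samples
    advantage = sample_rewards - baseline                  # stabilized reward signal
    return -(advantage.detach() * sample_logprobs).sum(dim=1).mean()

# Toy example: batch of 3 inputs, K = 5 samples each.
logprobs = torch.randn(3, 5, requires_grad=True)
rewards = torch.rand(3, 5)
loss = drl_loss(logprobs, rewards)
loss.backward()
print(float(loss), logprobs.grad.shape)
```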
• E (M ) Pre-training phase 1: Pre-train all auxiliary models: language models LM x and LM z , naive neural semantic parser P nsp and utterance discriminator P dis 2: Pre-train paraphrase models D x • E (0) and D Generate ẑ via model Use (ẑ, x) and (x, z) as pseudo samples, calculate loss L BT based on Eq.2; Dual Reinforcement Learning 9: Sample z via model Update model parameters, get new models In this section, we elaborate on different types of noise used in our experiment and the reward design in dual reinforcement learning. We introduce three types of noise to deliberately corrupt the input utterance in the DAE task. Importance-aware word dropping Traditional word dropping where w(x i ) is the word count of x i in X , and p max is the maximum dropout rate (p max = 0.2 in our experiment). As for canonical utterances, we apply this word dropping similarly. Mixed-source addition For any given raw input, it is either a natural language utterance or a canonical utterance. This observation discourages the shared encoder E to learn a common representation space. Thus, we propose to insert extra words from another source into the input utterance. As for noisy channel N x (•), which corrupts a natural language utterance, we first select one candidate canonical utterance z; next, 10%-20% words are randomly sampled from z and inserted into arbitrary position in x, see Table To pick candidate z with higher relevance, we use a heuristic method: C canonical utterances are randomly sampled as candidates (C = 50); we choose z that has the minimum amount of Word Mover's Distance concerning x (WMD, In order to provide more informative reward signals and promote the performance in the DRL task, we introduce various rewards from different aspects. Fluency The fluency of an utterance is evaluated by a length-normalized language model. We use individual language models (LM x and LM z ) for each type of utterances. As for a sampled natural language utterance x, the fluency reward is As for canonical utterances, we also include an additional 0/1 reward from downstream naive semantic parser to indicate whether the sampled canonical utterance z is well-formed as input for P nsp . ŷ =arg max y P nsp (y|z), greedy decoding + I • {no error while executing ŷ} Style Natural language utterances are diverse, casual, and flexible, whereas canonical utterances are generally rigid, regular, and restricted to some specific form induced by grammar rules. To distinguish their characteristics, we incorporate another reward signal that determine the style of the sampled utterance. This is implemented by a CNN discriminator where P dis (•) is a pre-trained sentence classifier that evaluates the probability of the input utterance being a canonical utterance. Relevance Relevance reward is included to measure how much content is preserved after paraphrasing. We follow the common practice to take the loglikelihood from the dual model. Some other methods include computing the cosine similarity of sentence vectors or BLEU score In this section, we evaluate our system on benchmarks OVERNIGHT and GEOGRANNO in both un-supervised and semi-supervised settings. Our implementations are public available OVERNIGHT It contains natural language paraphrases paired with logical forms over 8 domains. We follow the traditional 80%/20% train/valid to choose the best model during training. 
Canonical utterances are generated with tool SEMPRE GEOGRANNO Due to the language mismatch problem Throughout the experiments, unless otherwise specified, word vectors are initialized with Glove6B Supervised settings This is the traditional scenario, where labeled (x, y) pairs are used to train a one-stage parser directly, (x, z) and (z, y) pairs are respectively used to train different parts of a two-stage parser. We split all methods into two categories: one-stage and two-stage. In the one-stage parser, EMBED semantic parser is merely trained on (z, y) pairs but evaluated on natural language utterances. Contextual embeddings ELMo Semi-supervised settings To further validate our framework, based on the complete model in unsupervised settings, we also conduct semisupervised experiments by gradually adding part of labeled paraphrases with supervised training into the training process (both pre-training and cycle learning phase). As Table (2) Not surprisingly, model performance is sensitive to the word embedding initialization. On OVERNIGHT, directly using raw Glove6B word vectors, the performance is the worst among all baselines (19.7%). Benefiting from pre-trained embeddings ELMo or Bert, the accuracy is dramatically improved (26.2% and 32.7%). (3) When we share the encoder module in a one-stage parser for multi-tasking (MULTITASKDAE), the performance is not remarkably improved, even slightly lower than EMBED+BERT (31.9% compared to 32.7%, 38.1% to 40.7%). We hypothesize that a semantic parser utilizes the input utterance in a way different from that of a denoising auto-encoder, As for semi-supervised results: (1) when only 5% labeled data is added, the performance is dramatically improved from 60.7% to 68.4% on OVERNIGHT and 63.7% to 69.4% on GE-OGRANNO. (2) With 30% annotation, our system is competitive (75.0%/71.6%) to the neural network model using all data with supervised training. (3) Compared with the previous result reported in From the experimental results and Figure (2) It is also compatible with traditional supervised training and can easily scale up to handle more labeled data. In this section, we analyze the influence of each noise type in the DAE task and different combinations of schemes in the cycle learning phase on dataset OVERNIGHT. Table According to results in Table The most striking observation arising from Table In Table Annotation for Semantic Parsing Semantic parsing is always data-hungry. However, the annotation for semantic parsing is not user-friendly. Many researchers have attempted to relieve the burden of human annotation, such as training from weak supervision Unsupervised Learning for Seq2Seq Models Seq2Seq A.1 Model Implementations In this section, we give a full version discussion about all models used in our two-stage semantic parsing framework. Unsupervised paraphrase model We use traditional attention (1) a shared encoder encodes the input utterance x into a sequence of contextual representations h through a bi-directional single-layer LSTM (2) on the decoder side, a traditional LSTM language model at the bottom is used to model dependencies in target utterance z (φ is the embedding function on target side) s t =f LSTM (φ(z t-1 ), s t-1 ) s 0 =0-vector x, we pass it into the model D z • E and obtain a canonical utterance ẑ via greedy decoding. Then ẑ is forwarded into the dual paraphrase model D x •E. By measuring the BLEU score between raw input x and reconstructed utterance x, we obtain one metric BLEU (x, x). 
In the reverse path, we will obtain another metric by calculating the overall accuracy between raw canonical utterance z and its reconstructed version ẑ through the naive semantic parser P nsp . The overall metric for model selection is (λ is a scaling hyper-parameter, set to 4 in our experiments) Auxiliary models The naive semantic parser P nsp is another Seq2Seq model with exactly the same architecture as D z • E. We do not incorporate copy mechanism cause it has been proven useless on dataset OVERNIGHT | 779 | 2,170 | 779 |
Contrastive Visual Semantic Pretraining Magnifies the Semantics of Natural Language Representations | We examine the effects of contrastive visual semantic pretraining by comparing the geometry and semantic properties of contextualized English language representations formed by GPT-2 and CLIP, a zero-shot multimodal image classifier which adapts the GPT-2 architecture to encode image captions. We find that contrastive visual semantic pretraining significantly mitigates the anisotropy found in contextualized word embeddings from GPT-2, such that the intra-layer self-similarity (mean pairwise cosine similarity) of CLIP word embeddings is under .25 in all layers, compared to greater than .95 in the top layer of GPT-2. CLIP word embeddings outperform GPT-2 on wordlevel semantic intrinsic evaluation tasks, and achieve a new corpus-based state of the art for the RG65 evaluation, at .88. CLIP also forms fine-grained semantic representations of sentences, and obtains Spearman's ρ = .73 on the SemEval-2017 Semantic Textual Similarity Benchmark with no fine-tuning, compared to no greater than ρ = .45 in any layer of GPT-2. Finally, intra-layer self-similarity of CLIP sentence embeddings decreases as the layer index increases, finishing at .25 in the top layer, while the self-similarity of GPT-2 sentence embeddings formed using the EOS token increases layer-over-layer and never falls below .97. Our results indicate that high anisotropy is not an inevitable consequence of contextualization, and that visual semantic pretraining is beneficial not only for ordering visual representations, but also for encoding useful semantic representations of language, both on the word level and the sentence level. | Large-scale "natural language supervision" using image captions collected from the internet has enabled the first "zero-shot" artificial intelligence (AI) image classifiers, which allow users to create their own image classes using natural language, yet outperform supervised models on common language-and-image tasks The CLIP ("Contrastive Language Image Pretraining") image classification model introduced by model trained solely on next-word prediction, we can directly compare representations formed using the same architecture, but for two very different objectives: one solely linguistic, the other visual semantic. We observe differences between representations formed by GPT-2 and the CLIP language model ("LM") both on the word level and on the sentence level. We outline our contributions: 1. As shown in Figure | in CWEs which outperform other static and contextualized word embeddings on wordlevel intrinsic evaluation tasks. CLIP word embeddings obtained in a "decontextualized" setting (wherein the model is given only the word with no other context) set new state of the art for a corpus-based method on the RG65 intrinsic evaluation task 3. Contrastive visual semantic pretraining encodes semantically useful sentence representations which obtain Spearman's ρ = .73 on the SemEval-2017 Semantic Textual Similarity (STS) Benchmark using the cosine similarity between sentence pairs. CLIP results on the STS benchmark outperform those of GPT-2, which never exceed ρ = .45 in any layer of the model. 
Moreover, we find that while GPT-2 sentence embeddings formed using the end-of-sequence (EOS) token exhibit intralayer self-similarity ≥ .97 in all layers, the self-similarity of CLIP sentence embeddings steadily decreases over the layers of the model, from .98 to .25 in the top layer, indicating that the contrastive visual semantic pretraining objective of the model forces the formation of fine-grained semantic representations of sentences, such that they can be associated with encoded images. We make our code and data available at We review prior work on visual semantic AI, on the geometry and semantic properties of representations formed by language models, and on semantic intrinsic evaluation tasks. We examine CLIP and GPT-2, both of which are "foundation models," a term coined by GPT-2 is a contextualizing language model, meaning that it forms word representations which incorporate information from surrounding words ("context") CLIP is a "multimodal" model which combines language and image representations in a single joint visual semantic embedding space Ethayarajh (2019) find that CWEs in ELMo We report results on five word-level tasks: • RG-65 • WordSim-353, a word relatedness task consisting of 353 word pairs divided into two sets • SimLex-999, a word similarity task consisting of 666 noun-noun word pairs, 222 verb-verb word pairs, and 111 adjective-adjective word pairs • SimVerb-3500, a set of 3, 500 verb pairs rated on similarity by 843 study participants, and designed to remediate the lack of resources for evaluating verb semantics • ValNorm, which measures the quality of an embedding based on how well it reflects the valence norms of the language on which was trained Finally, we report results on a sentence-level task, the Semantic Textual Similarity (STS) Benchmark, a set of 8, 628 sentence pairs derived from SemEval STS tasks between 2012 and 2017 and rated on similarity For comparison of our results on CWE anisotropy with the prior work of Ethayarajh (2019), we encode the text of the SemEval Semantic Textual Similarity tasks from 2012 through 2016 While the CLIP LM is based on the GPT-2 architecture, there are minor differences between the models we examine. We outline our experiments, and discuss our approach for extracting both CWEs and sentence embeddings, and for computing self-similarity. We use the self-similarity formula of Note that cos in Equation 1 refers to cosine similarity, or the angular similarity of two vectors after normalization to unit length, a common method for measuring the semantic similarity of word embeddings. n refers to the number of word embeddings w used in the self-similarity measurement. Following Because As shown in Table Our results also indicate that adding the BOS token in GPT-2 significantly improves results on word-level semantic intrinsic evaluation tasks in the decontextualized setting. ValNorm scores im-prove from .59 to .76 in layer 7, and RG65 scores improve from .01 to .44 in the same layer. On every test, simply adding the BOS token outperforms results reported by Finally, we find that CLIP EOS token embeddings outperform CWEs in the top layer on two of five word-level intrinsic evaluation tasks, and nearly equal the performance of CLIP CWEs on the other three tasks. ValNorm scores fall to .72 for CLIP CWEs in the top layer, but increase to .80 for CLIP EOS token embeddings in that layer; and RG65 scores fall to .70 in the top layer for CLIP CWEs, but increase to .73 for CLIP EOS token embeddings. 
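For reference, the self-similarity measure used above (the mean pairwise cosine similarity over a set of contextualized embeddings) can be computed as in the following NumPy sketch; the variable names and the toy example at the bottom are illustrative, not taken from the authors' released code.

```python
import numpy as np

def self_similarity(embeddings):
    """Mean pairwise cosine similarity of a set of word (or sentence) embeddings.

    embeddings: array of shape (n, d), one contextualized vector per occurrence.
    """
    # Normalize to unit length so dot products equal cosine similarities.
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    unit = embeddings / np.clip(norms, 1e-12, None)
    sims = unit @ unit.T                      # (n, n) cosine similarity matrix
    n = unit.shape[0]
    # Average over all ordered pairs i != j (drop the diagonal of ones).
    return (sims.sum() - n) / (n * (n - 1))

rng = np.random.default_rng(0)
# Well-spread (isotropic) embeddings give values near 0 ...
print(self_similarity(rng.normal(size=(100, 64))))
# ... while near-identical (anisotropic) embeddings give values near 1.
print(self_similarity(np.ones((100, 64)) + 0.01 * rng.normal(size=(100, 64))))
```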
CWEs lose some of their mutual information with the input word as the model forms predictions about the next word in the sequence Additional visualizations of CLIP and GPT-2 performance on word-level intrinsic evaluation tasks are included in Appendix A. We report layerwise performance using sentence representations obtained from CLIP and GPT-2 on the STS benchmark Finally, we analyze the self-similarity of sentence embeddings from each model using Equation CLIP CWEs are less anisotropic than GPT-2 embeddings, and CLIP outperforms GPT-2 on wordlevel and sentence-level semantic evaluations. As illustrated in Figure As shown in Figure As shown in Figure Our findings are straightforward, but it is not obvious that they should occur. The training objective of CLIP is not to produce high-quality CWEs, or even sentence embeddings. Indeed, CLIP embeddings show that the high anisotropy observed by Our findings suggest that language models trained on visual semantic objectives are likely to privilege the encoding of semantic information, which is essential to matching a caption to an image. The more isotropic representations we observe reflect the objective of the model, which requires differentiating fine-grained semantic information. That models trained on visual semantic objectives would form embeddings to reflect the semantics of a word or sentence more than would a causal language model makes intuitive sense. Through the lens of the training objective, it is more problematic for a causal language model to predict a syntactically invalid continuation of a sentence, such as an incorrect part of speech, than to predict a somewhat unexpected but still syntactically valid continuation of a sentence. When a language model is trained to encode and associate the correct text caption with a matching image, however, the semantic content of the text becomes at least as important as its syntactic properties. Our work shows that a pretraining objective which is both visual semantic and contrastive in nature results in isotropic, highly semantic CWEs and sentence representations, in stark contrast to the representations formed by the same architecture when trained on a language modeling objective. However, further work is needed to address to what extent the results we observe are the result of contrastive training, and to what extent they are the result of visual semantic training. It is possible that a contrastive training objective, wherein the model must discriminate between correct and incorrect options, will result in isotropic and highly semantic embeddings even if both models produce linguistic representations. On the other hand, encoding language for the purpose of performing visual semantic tasks may be particularly important for achieving the effects seen in CLIP, as images lack a grammatical structure and are primarily semantic in composition. Future work might perform a direct assessment between representations obtained from the CLIP LM and representations learned by contrastive text-only models such as those recently introduced by This work examines semantics in contextualized representations without postprocessing, using cosine similarity as the similarity metric. While this is a common experimental design evaluated frequently in prior work, it is not the only way of assessing semantics in contextualized word embeddings. 
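As a concrete reference for the sentence-level evaluation reported above — scoring each layer by correlating cosine similarities of sentence-pair embeddings with the gold STS ratings — a short SciPy sketch is given below; data loading is omitted and the array names are illustrative.

```python
import numpy as np
from scipy.stats import spearmanr

def sts_score(sent1_embs, sent2_embs, gold_scores):
    """Spearman's rho between cosine similarities of sentence pairs and gold STS ratings.

    sent1_embs, sent2_embs: arrays of shape (num_pairs, d) from one layer of the model.
    gold_scores: human similarity ratings for the same sentence pairs.
    """
    a = sent1_embs / np.linalg.norm(sent1_embs, axis=1, keepdims=True)
    b = sent2_embs / np.linalg.norm(sent2_embs, axis=1, keepdims=True)
    cosine = (a * b).sum(axis=1)          # per-pair cosine similarity
    rho, _ = spearmanr(cosine, gold_scores)
    return rho
```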
For example, recent work indicates that semantics can be better isolated in language models like GPT-2 by postprocessing and transforming the embedding space, using methods such as removing high-magnitude directions with principal component analysis. Finally, we find that contrastive visual semantic pretraining produces isotropic CWEs which outperform a language model based on the same architecture on semantic evaluations at both the word level and the sentence level. Our findings indicate that incorporating visual semantic objectives into language models may be useful both to regularize the anisotropy in CWEs and to improve the semantic quality of both word and sentence representations. While the contrastive visual semantic objective of CLIP produces semantically rich representations of natural language, we caution that the model is also known to encode harmful societal biases. | 1,612 | 821 | 1,612
Did You Mean...? Confidence-based Trade-offs in Semantic Parsing | We illustrate how a calibrated model can help balance common trade-offs in task-oriented parsing. In a simulated annotator-in-the-loop experiment, we show that well-calibrated confidence scores allow us to balance cost with annotator load, improving accuracy with a small number of interactions. We then examine how confidence scores can help optimize the tradeoff between usability and safety. We show that confidence-based thresholding can substantially reduce the number of incorrect lowconfidence programs executed; however, this comes at a cost to usability. We propose the DidYouMean system (cf. Fig. | Task-oriented dialogue systems Recent work has focused on the calibration of semantic parsing models. Specifically, Stengel-Eskin and Van Durme (2022) benchmarked the calibration characteristics of a variety of semantic parsing models, finding some of them to be well-calibrated, especially on parsing for task-oriented dialogue. Given the relatively well-calibrated nature of these * Work done while at Johns Hopkins University. models, we first examine how they could be used in an annotation interface, with a view to balancing the trade-off between annotation cost and correctness. We simulate a human-in-the-loop (HITL) experiment where high-confidence tokens are automatically annotated and low-confidence tokens trigger a dialogue with an oracle annotator who either picks the correct token from a top-K list or manually inserts it. With a small number of interactions we substantially boost annotator accuracy. A similar trade-off exists between usability and safety in task-oriented user interfaces. We examine how sequence-level model confidence scores can be used to balance this trade-off by reducing the number of incorrect programs executed while also minimizing the number of follow-up user interactions and their cognitive burden. We find that thresholding outputs based on model confidence (i.e. rejecting outputs falling below a tuned threshold) reduces the number of incorrect programs executed by 76% compared to the baseline. However, this comes at a cost to usability, as roughly half the correctly-predicted parses are also rejected. To strike a balance between safety and usability, we introduce the DidYouMean system (cf. Fig. | Our experiments in Section 4 involve a predictive model for human-in-the-loop coding: similar models have been integrated into IDEs, e.g. Datasets Our data is drawn from the SMCalFlow (Semantic Models We use MISO For token confidence estimation, we use the maximum probability across the output vocabulary at each timestep. This has been shown to be a relatively robust confidence estimator in classification Production datasets like SMCalFlow are constantly evolving as new functionalities are added. The expensive and time-consuming nature of annotating data can be mitigated by the use of predictive parsing models which suggest speculative parses for new utterances. However, the model's output can be incorrect, especially given out-of-distribution inputs. We need to ensure that annotators are not introducing errors by overly trusting the model. If the model is well-calibrated, we can use its confidence to reduce such errors. 
For example, we can alert annotators to low confidence predictions and ask them to intervene Since we do not have access to expert SM-CalFlow annotators, we simulate an oracle humanin-the-loop (HITL) annotator who always provides a correct answer by using the gold annotations provided in the dataset. Specifically, for a given input, we decode the output tokens of a predicted program o 0 , . . . o n normally as long as predictions are confident (above a given threshold). If at time t the confidence p(o t ) falls the threshold, we attempt to match the decoded prefix o 0 , . . . , o t-1 to the gold prefix g 0 , . . . g t-1 . If the prefixes do not match, we count the example as incorrect. If they do match, we replace o t with g t , the gold prediction from our oracle annotator, and continue decoding. We consider three metrics in this experiment: (1) The exact match accuracy of the decoded programs (higher is better). ( Results and Analysis Fig. Section 4 showed that token-level confidence scores can be used to balance speed and correctness in an annotation interface. We see a similar tradeoff between safety and usability in user interfaces using semantic parsing models. Here, we define safety as rejecting unsuccessful programs before executing them. This strict definition is motivated by physical domains: imagine that rather than controlling a digital assistant, a user is guiding a robot via language commands (e.g. The types of requests the agent makes have varying cognitive load on the user: for example, providing confirmation takes less effort than rephrasing. We measure how well we can reject incorrect programs before executing them. Following past work in selective prediction We consider 3 systems. As a baseline, we consider a system that executes everything it predicts (accept); this will result in the highest-possible coverage, but also high risk. We can also use MISO's calibrated nature to improve safety outcomes by tuning a sequence-level confidence threshold for rejecting programs (tuned). We tune on the full validation set using F1; we explore the range [0.0, 1.0) in increments of 0.01, finding 0.40 to be optimal. Finally, we introduce the DidYouMean system for filtering low-confidence programs. For a given utterance, DidYouMean shows the user a paraphrase of the input; the user then decides to accept the parse based on this paraphrase. This allows correctly-predicted low-confidence programs to be accepted and executed, while reducing the user load: making a binary choice to accept a paraphrase is a receptive task, while rephrasing an instruction is a more costly productive task. Details of DidY-ouMean's components are given below. Glossing Model Since users are typically unfamiliar with formats like Lisp, we need to present the user with a natural language paraphrase -or gloss -of the candidate parse. To train a glossing model, we modify Roy et al. ( DidYouMean System When a low-confidence parse is detected, DidYouMean triggers a dialogue with the user in order to recover some usability over simply rejecting all low-confidence parses. Fig. User Study We conduct a static user study of DidYouMean with examples from the SMCalFlow validation set. We sample 100 MISO predictions with a confidence below 0.6 (to ensure that the set contains a number of mistakes). This sample is stratified across 10 equally-spaced bins with 10 samples per bin. 
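The simulated annotator-in-the-loop procedure described earlier in this section reduces to a simple decoding loop: decode greedily while token confidence stays above the threshold, and fall back to the oracle (gold) token when it drops. The sketch below assumes a `step` function that returns the model's next-token prediction and confidence given the prefix; it illustrates the described procedure and is not the authors' implementation.

```python
def hitl_decode(step, gold, threshold):
    """Simulated human-in-the-loop decoding with an oracle annotator.

    step(prefix) -> (best_token, confidence): model prediction for the next token.
    gold: the gold output token sequence, used as the oracle annotation.
    Returns (output_tokens, num_interventions, is_correct).
    """
    output, interventions = [], 0
    for t in range(len(gold)):
        token, confidence = step(output)
        if confidence < threshold:
            # Low confidence: check that the decoded prefix still matches gold.
            if output != list(gold[:t]):
                return output, interventions, False   # counted as incorrect
            token = gold[t]                           # the oracle supplies the token
            interventions += 1
        output.append(token)
    return output, interventions, output == list(gold)
```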
MTurk annotators were shown the dialogue history, the user utterance, and the gloss, and asked to confirm that the gloss matched the utterance's intent. The template and instructions can be seen in Appendix B. We obtained three judgments from three different annotators per example. In total eight annotators participated in the task, with four completing the majority of tasks. For each example, all three annotators agreed on 79% examples, indicating the task is well-formulated. For the remaining 21%, we use the majority decision to accept or reject. After majority voting, annotators accepted 68/100 glosses and rejected 32. Results Table The "chosen" system, while better in F1, is comparable to the "tuned" system in F0.5, which takes both usability and safety into account but prioritizes safety at a 2:1 ratio. Users are able to recover some usability (as measured by coverage) in this setting but also add to the risk, which is higher for "chosen" than "tuned". The number of incorrect programs executed increases when glosses are chosen (as compared to the tuned threshold). When the accepted glosses are re-parsed, we see a shift back towards a system favoring safety, with fewer incorrect programs being executed than in the "chosen" setting; this is reflected in a lower risk score. For both F1 and F0.5, the "re-parsed" system best balances usability and safety. These results show that a calibrated model can be used with a threshold to greatly improve safety, reducing the number of incorrect programs accepted by 76%. DidYouMean allows users to recover some low-confidence programs by accepting and rejecting programs based on their glosses, resulting in the best aggregated scores. Note also that the threshold was tuned on F1 score on the entire dev set. This means that the F1 performance of that tuned system is as high as possible for confidence-threshold-based system. Thus, DidY-ouMean achieves a balance outside what can be achieved by tuning: simply increasing the threshold would decrease safety and result in a lower F1 than the current threshold of 0.40. In Section 5 we presented the DidYouMean system, which allowed users to confirm or reject a potential gloss. The gloss was chosen from a larger set of candidates; this naturally raises the question of whether users can directly choose a gloss from the candidate list, rather than accepting or rejecting a single gloss at a time. Here, we examine to what extent it is helpful to users to choose glosses from a list of options. We take a sample of validation programs for SMCalFlow, stratified across 10 evenly-spaced confidence bins from 0.0 to 1.0. Note that this differs from the setting in Section 5, where the maximum bin was 0.6. We then present the top predictions decoded with nucleus sampling Annotators were asked to choose one of the top predictions or to manually rewrite the query if none of the predictions were adequate; the chosen or rewritten query was then re-parsed and compared to the gold parse. Further annotation details are given in Appendix C. Of the 100 examples sampled, annotators manually rewrote 7 and chose from the top-k list for the other 93. Ignoring the rewritten examples, 39 model predictions were incorrect and 54 were correct; by choosing glosses, annotators correct 5 incorrect predictions. However, they also inadvertently changed 4 correct predictions to incorrect. Figure Fig. We examine two common trade-offs in semantic parsing, and how a well-calibrated model can be used to balance them. 
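For reference, the coverage/risk bookkeeping behind the comparisons above can be sketched as follows, assuming the usual selective-prediction definitions (coverage = fraction of predicted programs executed, risk = fraction of executed programs that are incorrect); combining them with an F-beta over (1 − risk) and coverage is an assumption made for illustration, since the exact aggregation is not reproduced in this excerpt.

```python
import numpy as np

def selective_metrics(confidences, is_correct, threshold, beta=1.0):
    """Coverage/risk at a sequence-level confidence threshold, plus an F-beta summary."""
    confidences = np.asarray(confidences)
    is_correct = np.asarray(is_correct, dtype=bool)
    executed = confidences >= threshold        # programs the system accepts and runs
    coverage = executed.mean()                 # usability: how much gets executed
    if executed.sum() == 0:
        return coverage, 0.0, 0.0
    risk = (~is_correct[executed]).mean()      # safety: executed-but-wrong rate
    # Assumed trade-off summary: F-beta over safety (1 - risk) and coverage;
    # beta = 0.5 weights safety twice as much as coverage.
    p, r = 1.0 - risk, coverage
    f_beta = (1 + beta**2) * p * r / (beta**2 * p + r)
    return coverage, risk, f_beta
```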
In Section 4 we illustrated how token-level model confidences could be used in a simulated HITL task-oriented parsing annotation task. Our experiments in Section 5 extended these results to sequence-level confidences and nonexpert users; we found that model confidence could be used to improve the usability-safety trade-off and introduced DidYouMean, which improved usability by asking users to accept predictions. Our study is limited by the models, datasets, and languages we consider. Firstly, we examine only English datasets, limiting the impact of our results. We also only consider one task-oriented parsing dataset, and focus on one model architecture. We make several limiting assumptions in Section 4 and Section 5. Foremost amongst these is the assumption of access to an oracle annotator in Section 4; clearly, no such annotator exists. Our results may vary when real annotators are brought into the loop. For one, we do not know exactly how choosing from the top-k list will compare to insertion w.r.t. speed. We also do not know how automation bias The experiments in Section 5 rely on a glossing model to translate predicted programs into natural language (NL). We approached with a neural Lispto-NL model; this has several limitations. Neural text generation models often hallucinate outputs, i.e. generated glosses may not be faithful to their corresponding programs. Unlike Fang et al. ( Fig. The instructions asked users to read the paraphrase produced by the agent and determine whether the agent had correctly understood the user. Annotators were recruited from a list of qualified annotators on Amazon Mechanical Turk and paid $0.05 per example, averaging to roughly $16 per hour. The template for the selection study is shown in Fig. this was only to be done in cases where none of the options reflected the intended meaning of the original utterance. | 606 | 1,651 | 606 |
Regularized Structured Perceptron: A Case Study on Chinese Word Segmentation, POS Tagging and Parsing | Structured perceptron becomes popular for various NLP tasks such as tagging and parsing. Practical studies on NLP did not pay much attention to its regularization. In this paper, we study three simple but effective task-independent regularization methods: (1) one is to average weights of different trained models to reduce the bias caused by the specific order of the training examples; (2) one is to add penalty term to the loss function; | Structured perceptron is a linear classification algorithm. It is used for word segmentation The averaged perceptron or the voted perceptron Regularization is to improve the ability of generalization and avoid over-fitting for machine learning algorithms including online learning algorithms In this paper, we treat the perceptron algorithm as a special case of the stochastic gradient descent (SGD) algorithm and study three kinds of simple but effective task-independent regularization methods that can be applied. The averaging method is to average the weight vectors of different models. We propose a "shuffle-and-average" method to reduce the bias caused by the specific order of the training examples. The traditional penalty method is to add penalty term to the loss function. The dropout method is to randomly corrupt the data flow during training. We show that this dropout method originally used in neural network also helps the structured perceptron. In Section 2, we describe the perceptron algorithm as a special case of the stochastic gradient descent algorithm. Then we discuss three kinds of regularization methods for structured perceptron in Section 3, 4 and 5, respectively. Experiments conducted in Section 6 shows that these regularization methods and their combinations improve performances of NLP tasks such as Chinese word segmentation, POS tagging and dependency parsing. Applying proper regularization methods, the error reductions of these NLP tasks can be up to 10%. We finally conclude this work in Section 7. | We treat the structured perceptron architecture as a multi-layer feed-forward neural network as in Figure The network of the structured perceptron has three layers. The input vector x and output vector y of the structured classification task are concatenated as the input layer. The hidden layer is the feature vector Φ(x, y). The connections between the input layer and the hidden layer are usually hand-crafted and fixed during training and predicting. And the output layer of the network is a scalar w • Φ(x, y) which is used to evaluate the matching of the vector x and y. Besides the common process to calculate the output layer given the input layer, there is a process called decoding, which is to find a vector z to maximum the activation of the output layer: By carefully designing the feature vector, the decoding can be efficiently performed using dynamic programming. Beam search is also commonly used for the decoding of syntactical parsing tasks. In the predicting precess, the vector z is the structured output corresponding to x. In the training precess, what we expect is that for every input x i , the vector z i that maximums the activation of the output layer is exactly the gold standard output y i . We define the loss function as the sum of the margins of the whole training data: where The unconstrained optimization problem of the training process is The loss function is not convex but calculating the derivative is easy. 
One of the algorithms to solve this optimization problem is SGD. Here we use the minibatch with size of 1, which means in every iteration we use only one training example to approximate the loss function and the gradient to update the weight vector: (5) where w (t) is the weight vector after t updates. Note that in this case, the learning rate η can be set to an arbitrary positive real number. In the perceptron algorithm commonly used in NLP First, we investigate the effect of averaging techniques for regularization. Figure For the Chinese word segmentation task which is a relatively simple task, averaging about five different models can achieve the best effect; whereas for POS tagging and parsing, averaging more models will continually increase the performance even when the number of models approaches 10. The dotted lines in Figure using Equation ( Averaging the weight vectors in the learning process is one of the most popular regularization techniques of the structured perceptron Suppose the learning algorithm stopped after T updates. The final weight vector is calculated as: The intuition might be that the learned weight vector is dependent on the order of the training examples. The final vector w (T ) may be more appropriate for the last few training examples than the previous ones. The averaging method is used to avoid such tendency. Similar treatment is used in other sequential algorithm such as the Markov chain Monte Carlo sampling method. Since this regularization technique is widely used and tested, it is used for all the models in the experiments of this paper. Any other regularization methods are applied to this basic averaged perceptron. As we has mentioned that the learned weight vector is strongly dependent on the order of the training examples, randomly shuffling the training examples results in different weight vectors. Based on such observation, we training different weight vectors using the same training examples with different orders, and average them to get the final weight vector. We use this method to further minimize the side effect caused by this online algorithm. Suppose we shuffle and train n different weight vectors w [1] , . . . , w [n] , the j-th component of the final vector can be simply calculated as Note that generally these models do not share the same feature set. Features may be used in one model but not in another one. When w [i] j = 0, it does not imply that this feature has no effect on this problem. It only implies that this feature does not have chances to be tested. We propose a modified equation to only average the non-zero components: This equation makes the low-frequency features more important in the final model. Here we investigate the penalty techniques for regularization only using the character-based Chinese word segmentation task. Figure With appropriate hyper-parameters, the performances are increased. According to the performances, adding L2 penalty is slightly better than adding L1 penalty or adding cumulative L1 penalty. We then combine the "shuffle-and-average" method with the penalty methods. The performances (solid lines in Figure We can add a square of the L2-norm of the weight vector as the penalty term to the loss function as where λ 2 is a hyper-parameter to determine the strength of the penalty. In the SGD algorithm, the update method of the weight vector is thus The term (1 -ηλ 2 ) is used to decay the weight in every updates. This forces the weights to be close to zero. 
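The shuffle-and-average procedure described above reduces to a few lines of array arithmetic; the NumPy sketch below uses illustrative names (with `train_perceptron` standing in for one run of the averaged structured perceptron) and includes the variant that averages each feature weight only over the models in which it is non-zero.

```python
import numpy as np

def shuffle_and_average(train_perceptron, examples, n_models, dim,
                        seed=0, nonzero_only=True):
    """Train n models on differently shuffled data and average their weight vectors.

    train_perceptron(examples) -> weight vector of length `dim`
    (e.g., an averaged structured perceptron trained on that example ordering).
    """
    rng = np.random.default_rng(seed)
    weights = np.zeros((n_models, dim))
    for i in range(n_models):
        order = rng.permutation(len(examples))        # a different order per model
        weights[i] = train_perceptron([examples[j] for j in order])
    if not nonzero_only:
        return weights.mean(axis=0)                   # plain averaging over all models
    # Average each component only over the models in which it is non-zero,
    # so low-frequency features are not diluted by models that never tested them.
    counts = np.count_nonzero(weights, axis=0)
    sums = weights.sum(axis=0)
    return np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)
```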
Another commonly used penalty term is the L1norm of the weight vector. This kinds of terms usually results in sparse weight vector. Since the averaged perceptron is used, the final averaged weight vector will not be sparse. The loss function using the L1-nrom penalty is where λ 1 is the hyper-parameter to determine the strength of the penalty. The derivative of the penalty term is discontinuous. We update the weights as This ensures that the weight decay will not change the sign of the weight. An modified version of the L1 penalty for the online learning is the cumulative L1 penalty And the cumulative penalty is calculated separately In the second step, |w i | and c i are compared and at most one of them is non-zero before the next update 5 Dropout Dropout Models not so deep such as the structured perceptron may also benefit from this idea. Following the dropout method used in neural network, we give the similar method for structured perceptron. We can perform dropout for structured perceptron by corrupting the input layer in Figure where n i ∼ Bern(p) obey a Binomial distribution with the hyper-parameter p. During training, the decoding processing with the corrupted input is The x in the loss function is also substituted with the corrupted version x. Note that the corruption decreases the number of non-zero components of the feature vector Φ, which makes the decoding algorithm harder to find the gold standard y. For NLP tasks, the input vector x could be a sequence of tokens (words, POS tags, etc.). The corruption substitutes some of the tokens with a special token null. Any features contain such token will be omitted (This is also the case for the out-of-vocabulary words during predicting). So the dropout of x in NLP during training can be explained as to randomly mask some of the input tokens. The decoder algorithm needs to find out the correct answer even if some parts of the input are unseen. This harder situation could force the learning algorithm to learn better models. The dropout can also be performed at the hidden layer. Likewise, the components of the corrupted feature vector Φ is calculated as where m i ∼ Bern(q) obey a Binomial distribution with the hyper-parameter q. The Φ in the decoding processing during training and the loss function is substituted with Φ. In this section, we first introduce three NLP tasks using structured perceptron namely Chinese word segmentation, POS tagging and dependency parsing. Then we investigate the effects of regularization methods for structured perceptron mainly on the development set of character-based Chinese word segmentation. Finally, we compare the final performances on the test sets of these three tasks using regularization methods with related work. Table Structure perceptron with feature templates in Figure Table To compare with the perceptron algorithm, we use the conditional random field model (CRF) with the same feature templates in Table We further re-implemented a word-based Chinese word segmentation model with the feature templates following The second task is joint Chinese word segmentation and POS tagging. This can also be modeled as a character-based sequence labeling task. The tag set is a Cartesian product of the tag set for Chinese word segmentation and the set of POS tags. For example, the tag B-NN indicates the character is the first character of a multi-character noun. 
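As a reference for the update rules above, here is an illustrative per-step update covering L2 weight decay, the sign-preserving L1 shrinkage, and the token-level input dropout; it is a simplified rendering of the described rules (the cumulative L1 bookkeeping is omitted), not the authors' code.

```python
import numpy as np

def penalized_update(w, delta, eta, l2=0.0, l1=0.0):
    """One perceptron-style SGD step with optional L2 decay and sign-preserving L1 shrinkage.

    delta: the update direction, e.g. Phi(x, y) - Phi(x, z) for the perceptron loss.
    """
    w = w + eta * delta
    if l2 > 0.0:
        w = (1.0 - eta * l2) * w                       # L2: decay every weight toward zero
    if l1 > 0.0:
        # L1: shrink magnitudes by eta*l1 but never flip a weight's sign.
        w = np.sign(w) * np.maximum(np.abs(w) - eta * l1, 0.0)
    return w

def dropout_tokens(tokens, p, rng, null="<null>"):
    """Corrupt the input by replacing each token with a null symbol with probability p.

    Any feature touching a null token is effectively dropped, mimicking unseen words.
    """
    return [null if rng.random() < p else tok for tok in tokens]
```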
The tag sequence y = B-NR M-NR E-NR S-DEG B-NN E-NN, (24) for the input sentence in Equation ( The same feature templates shown in Table Table The performance of the parsing model is also improved by using more regularization methods, although the improvement is not as remarkable as those for tagging tasks. For the parsing tasks, there are many other factors that impact the performance. We also investigate the dropout method for regularization using the character-based Chinese word segmentation task. Figure changes the weights to learn different representations for the input layer. On the other hand, the dropout for the input layer improves the performance. Combining the dropout and the "shuffleand-average" method, the performance is further improved. Figure The results of the POS tagging models on the CTB5 corpus are shown in Table If we define the error rate as 1 -jf, the error reduction by applying regularization methods for the character-based model is more than 10%. Comparing to the related work, the character-based model that we used is quite simple. But using the regularization methods discussed in this paper, it provides a comparable performance to the best model in the literature. The "shuffle-and-average" method can effectively reduce the bias caused by the specific order of the training examples. It can improve the performance even if some other regularization methods are applied. When we treat the perceptron algorithm as a special case of the SGD algorithm, the traditional penalty methods can be applied. And our observation is that L2 penalty is better than L1 penalty. The dropout method is derived from the neural network. Corrupting the input during training improves the ability of generalization. The effects of the penalty method and the dropout method have some overlap. Experiments showed that these regularization methods help different NLP tasks such as Chinese word segmentation, POS tagging and dependency parsing. Applying proper regularization methods, the error reductions for some of these NLP tasks can be up to 10%. We believe that these methods can also help other models which are based on structured perceptron. | 440 | 1,538 | 440 |
Deterministic shift-reduce parsing for unification-based grammars by using default unification | Many parsing techniques including parameter estimation assume the use of a packed parse forest for efficient and accurate parsing. However, they have several inherent problems deriving from the restriction of locality in the packed parse forest. Deterministic parsing is one of solutions that can achieve simple and fast parsing without the mechanisms of the packed parse forest by accurately choosing search paths. We propose (i) deterministic shift-reduce parsing for unification-based grammars, and (ii) best-first shift-reduce parsing with beam thresholding for unification-based grammars. Deterministic parsing cannot simply be applied to unification-based grammar parsing, which often fails because of its hard constraints. Therefore, it is developed by using default unification, which almost always succeeds in unification by overwriting inconsistent constraints in grammars. | Over the last few decades, probabilistic unification-based grammar parsing has been investigated intensively. Previous studies Feature forests have been used successfully for probabilistic HPSG and CCG In this paper, we investigate shift-reduce parsing approach for unification-based grammars without the mechanisms of the packed parse forest. Shift-reduce parsing for CFG and dependency parsing have recently been studied Sections 2 and 3 explain unification-based grammars and default unification, respectively. Shift-reduce parsing for unification-based grammars is presented in Section 4. Section 5 discusses our experiments, and Section 6 concludes the paper. | A unification-based grammar is defined as a pair consisting of a set of lexical entries and a set of phrase-structure rules. The lexical entries express word-specific characteristics, while the phrase-structure rules describe constructions of constituents in parse trees. Both the phrasestructure rules and the lexical entries are represented by feature structures In the experiments, we used an HPSG Default unification was originally investigated in a series of studies of lexical semantics, in order to deal with default inheritance in a lexicon. It is also desirable, however, for robust processing, because (i) it almost always succeeds and (ii) a feature structure is relaxed such that the amount of information is maximized 𝐹 ⊔ 𝐺 = 𝐹 ⊔ 𝐺′ 𝐺 ′ ⊑ 𝐺 is maximal such that 𝐹 ⊔ 𝐺 ′ is defined 𝐹 is called a strict feature structure, whose information must not be lost, and 𝐺 is called a default feature structure, whose information can be lost but as little as possible so that 𝐹 and 𝐺 can be unified. Credulous default unification is greedy, in that it tries to maximize the amount of information from the default feature structure, but it results in a set of feature structures. Skeptical default unification simply generalizes the set of feature structures resulting from credulous default unification. Skeptical default unification thus leads to a unique result so that the default information that can be found in every result of credulous default unification remains. The following is an example of skeptical default unification: Copestake mentioned that the problem with Carpenter's default unification is its time complexity 𝐹 ⊔ 𝐺 = 𝐻 ⊔ ⨆ 𝐹 𝐹 ∈ 𝑃𝑉(𝐺)and there is no 𝐹 ′ ∈ 𝑃𝑉(𝐺) such that 𝐻 ⊔ 𝐹 ′ is defined and 𝐻 ⊔ 𝐹 ⊔ 𝐹 ′ is not defined , where 𝐻 = 𝐹 ⊔ ⨆ 𝑃𝐸(𝐺). 
Copestake's default unification works efficiently because all path equations in the default feature structure are unified with the strict feature structures, and because the unifiability of path values is checked one by one for each node in the result of unifying the path equations. The implementation is almost the same as that of normal unification, but each node of a feature structure has a set of values marked as "strict" or "default." When types are involved, however, it is not easy to find unifiable path values in the default feature structure. Therefore, we implemented a more simply typed version of Corpestake's default unification. Figure Non-deterministic shift-reduce parsing for unification-based grammars has been studied by Briscoe and Carroll The deterministic shift-reduce parsing algorithm for unification-based grammars mainly comprises two data structures: a stack S, and a queue W. Items in S are partial parse trees, including a lexical entry and a parse tree that dominates the whole input sentence. Items in W are words and POSs in the input sentence. The algorithm defines two types of parser actions, shift and reduce, as follows. • Shift: A shift action removes the first item (a word and a POS) from W. Then, one lexical entry is selected from among the candidate lexical entries for the item. Finally, the selected lexical entry is put on the top of the stack. Shift Features • Binary Reduce: A binary reduce action removes two items from the top of the stack. Then, partial parse trees are derived by applying binary rules to the first removed item and the second removed item as a right daughter and left daughter, respectively. Among the candidate partial parse trees, one is selected and put on the top of the stack. • Unary Reduce: A unary reduce action removes one item from the top of the stack. Then, partial parse trees are derived by applying unary rules to the removed item. Among the candidate partial parse trees, one is selected and put on the top of the stack. Parsing fails if there is no candidate for selection (i.e., a dead end). Parsing is considered successfully finished when W is empty and S has only one item which satisfies the sentential condition: the category is verb and the subcategorization frame is empty. Parsing is considered a non-sentential success when W is empty and S has only one item but it does not satisfy the sentential condition. In our experiments, we used a maximum entropy classifier to choose the parser's action. Figure The deterministic parsing can fail because of its grammar's hard constraints. So, we use default unification, which almost always succeeds in unification. We assume that a head daughter (or, an important daughter) is determined for each binary rule in the unification-based grammar. Default unification is used in the binary rule application in the same way as used in Ninomiya's offline robust parsing In the experiments, we had no failure in the binary rule application with default unification. Another approach for recovering from the parsing failure is backtracking. When parsing fails or ends with non-sentential success, the parser's state goes back to some old state (backtracking), and it chooses the second best action and tries parsing again. The old state is selected so as to minimize the difference in the probabilities for selecting the best candidate and the second best candidate. We define a maximum number of backtracking steps while parsing a sentence. 
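The deterministic shift-reduce loop described above can be sketched as follows; `classifier`, `lexical_entries`, and the rule-application functions are placeholders for the HPSG-specific components (the maximum entropy action classifier and the unification-based rules), so this is a structural illustration rather than a faithful reimplementation. Backtracking and beam search, discussed next, build on this basic loop.

```python
def deterministic_parse(words, classifier, lexical_entries, apply_binary, apply_unary):
    """Deterministic shift-reduce parsing for a unification-based grammar.

    classifier(stack, queue) -> "shift", "binary_reduce", or "unary_reduce"
    lexical_entries(word)    -> candidate lexical entries for the word/POS item
    apply_binary(left, right), apply_unary(item) -> candidate partial parse trees
                                                    (empty if unification fails).
    """
    stack, queue = [], list(words)
    while queue or len(stack) > 1:        # stop when the queue is empty and one tree remains
        action = classifier(stack, queue)
        if action == "shift" and queue:
            candidates = lexical_entries(queue.pop(0))
        elif action == "binary_reduce" and len(stack) >= 2:
            right, left = stack.pop(), stack.pop()
            candidates = apply_binary(left, right)    # may fail under hard unification
        elif action == "unary_reduce" and stack:
            candidates = apply_unary(stack.pop())
        else:
            return None                               # dead end: parsing fails
        if not candidates:
            return None                               # no rule/lexical entry applies
        stack.append(candidates[0])                   # the classifier-selected candidate
    return stack[0] if stack else None
```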
Backtracking repeats until parsing finishes with sentential success or reaches the maximum number of backtracking steps. If parsing fails to find a parse tree, the best continuous partial parse trees are output for evaluation. From the viewpoint of search algorithms, parsing with backtracking is a sort of depth-first search algorithms. Another possibility is to use the best-first search algorithm. The best-first parser has a state priority queue, and each state consists of a tree stack and a word queue, which are the same stack and queue explained in the shift-reduce parsing algorithm. Parsing proceeds by applying shift-reduce actions to the best state in the state queue. First, the best state is re-moved from the state queue, and then shiftreduce actions are applied to the state. The newly generated states as results of the shift-reduce actions are put on the queue. This process repeats until it generates a state satisfying the sentential condition. We define the probability of a parsing state as the product of the probabilities of selecting actions that have been taken to reach the state. We regard the state probability as the objective function in the best-first search algorithm, i.e., the state with the highest probabilities is always chosen in the algorithm. However, the best-first algorithm with this objective function searches like the breadth-first search, and hence, parsing is very slow or cannot be processed in a reasonable time. So, we introduce beam thresholding to the best-first algorithm. The search space is pruned by only adding a new state to the state queue if its probability is greater than 1/b of the probability of the best state in the states that has had the same number of shift-reduce actions. In what follows, we call this algorithm beam search parsing. In the experiments, we tested both backtracking and beam search with/without default unifi-cation. Note that, the beam search parsing for unification-based grammars is very slow compared to the shift-reduce CFG parsing with beam search. This is because we have to copy parse trees, which consist of a large feature structures, in every step of searching to keep many states on the state queue. In the case of backtracking, copying is not necessary. We evaluated the speed and accuracy of parsing with Enju 2.3β, an HPSG for English We measured the accuracy of the predicate argument relation output of the parser. A predicate-argument relation is defined as a tuple 〈𝜎, 𝑤 , 𝑎, 𝑤 〉, where 𝜎 is the predicate type (e.g., The performance of "beam(403.4)" was evaluated to see the limit of the performance of the beam-search parsing. Deterministic parsing without default unification achieved accuracy with an LF of around 79.1% (Section 23, gold POS). With backtracking, the LF increased to 83.6%. Figure For comparison with previous studies using the packed parse forest, the performances of Miyao's parser, Ninomiya's parser, Matsuzaki's parser and Sagae's parser are also listed in Table We have presented shift-reduce parsing approach for unification-based grammars, based on deterministic shift-reduce parsing. First, we presented deterministic parsing for unification-based grammars. Deterministic parsing was difficult in the framework of unification-based grammar parsing, which often fails because of its hard constraints. We introduced default unification to avoid the parsing failure. Our experimental results have demonstrated the effectiveness of deterministic parsing with default unification. 
The experiments revealed that deterministic parsing with default unification achieved high accuracy, with a labeled F-score (LF) of 87.6% on Section 23 of the Penn Treebank with gold POSs. Second, we also presented best-first parsing with beam search for unification-based grammars. Among the settings without default unification, best-first parsing with beam search achieved the best accuracy, with an LF of 87.0%. Default unification further increased the LF from 87.0% to 88.5%, and widening the beam width raised it to 90.0%. | 883 | 664 | 883
Discrete Latent Variable Representations for Low-Resource Text Classification | While much work on deep latent variable models of text uses continuous latent variables, discrete latent variables are interesting because they are more interpretable and typically more space efficient. We consider several approaches to learning discrete latent variable models for text in the case where exact marginalization over these variables is intractable. We compare the performance of the learned representations as features for lowresource document and sentence classification. Our best models outperform the previous best reported results with continuous representations in these low-resource settings, while learning significantly more compressed representations. Interestingly, we find that an amortized variant of Hard EM performs particularly well in the lowest-resource regimes. 1 | Deep generative models with latent variables have become a major focus of NLP research over the past several years. These models have been used both for generating text At the same time, deep generative models with discrete latent variables are attractive because the latents are arguably more interpretable, and because they lead to significantly more compressed representations: A representation consisting of M floating point values conventionally requires M × 32 bits, whereas M integers in {1, . . . , K} requires only M × log 2 K bits. Unfortunately, discrete latent variable models have a reputation for being more difficult to learn. We conduct a thorough comparison of several popular methods for learning such models, all within the framework of maximizing the evidence lower bound (ELBO) on the training data. In particular, we compare learning such models either with a Vector Quantized-VAE (van den Our classification experiments distinguish between (1) the setting where the classifier must consume only the discrete representation associated with each sentence (i.e., the discrete assignment that maximizes the approximate posterior), and (2) the setting where the classifier may consume the embeddings of this discrete representation learned by the VAE encoder. Note that the former setting is more flexible, since we need only store a sentence's discrete representation, and are therefore free to use task-specific (and possibly much smaller) architectures for classification. In case (1), we are able to effectively match the performance of | Our work builds on recent advances in discrete representation learning and its applications. In particular, we are inspired by recent success with VQ-VAEs outside NLP In addition to exploring the viability of VQ-VAEs for text representation learning, an important part of this paper is a systematic comparison between different discretization techniques. Gumbel-Softmax To demonstrate the usefulness of our models, we focus on improving low-resource classification performance by pretraining on unlabeled text. Previous best results are obtained with continuous latentvariable VAEs, e.g., VAMPIRE We consider generative models of a sequence x = x 1:T of T word tokens. We assume our latents to be a sequence z = z 1:L of L discrete latent vectors, each taking a value in {1, . . . , K} M ; that is, z ∈ {1, . . . , K} M ×L . As is common in VAE-style models of text, we model the text autoregressively, and allow arbitrary interdependence between the text and the latents. That is, we have p(x, z; θ) = p(z) × T t=1 p(x t | x <t , z; θ), where θ are the generative model's parameters. 
We further assume p(z) to be a fully factorized, uniform prior: Maximizing the marginal likelihood of such a model will be intractable for moderate values of K, M , and L. So we consider learning approaches that maximize the where q(z | x; φ) is the approximate posterior given by an inference or encoder network with parameters φ. The approaches we consider differ in terms of how this approximate posterior q is defined. Mean-Field Categorical VAE (CatVAE) A standard Categorical VAE parameterizes the approximate posterior as factorizing over categorical distributions that are independent given x. We therefore maximize: where q(z | x; φ)= M m=1 L l=1 q ml (z ml | x; φ), p ml = 1/K, and H is the entropy. We approximate the expectation above by sampling from the q ml , and we use the straight-through gradient estimator and e (m) j ∈ R d is an embedding of the j th discrete value z ml can take on, and enc(x) ml ∈ R d is an encoding corresponding to the ml th latent given by an encoder network. These e (m) j embedding vectors are often referred to as a VQ-VAE's "code book". In our setting, a code book is shared across latent vectors. VQ-VAEs are typically learned by maximizing the ELBO assuming degenerate approximate posteriors as above, plus two terms that encourage the encoder embeddings and the "code book" embeddings to become close. In particular, we attempt to maximize the objective: where sg is the stop-gradient operator, and ẑ = ẑ1:L is the sequence of minimizing assignments ẑm,l for each enc(x) ml . The loss term following the β is known as the "commitment loss". Gradients of the likelihood term with respect to enc(x) are again estimated with the straight-through gradient estimator. Hard EM We train with an amortized form of Hard EM. First we define a relaxed version of z, z, where each zml is a softmax over K outputs (rather than a hard assignment) and is produced by an inference network with parameters φ. 2 In the E-Step, we take a small, constant number of 2 Note this assumes our generative model can condition on such a relaxed latent variable. In VQ-VAE, an alternative to the objective in Equation ( Nearest neighbor index Probability vector gradient steps to maximize log p(x | z; θ) with respect to φ (for a fixed θ). In the M-Step, we take a single gradient step to maximize log p(x | ẑ; θ) with respect to θ, where ẑ contains the elementwise argmaxes of z as produced by the inference network (with its most recent parameters φ). Thus, Hard EM can also be interpreted as maximizing the (relaxed) ELBO. We also note that taking multiple steps in the hard E-step somewhat resembles the recently proposed aggressive training of VAEs Categorical sample Recall that the latent sequence is z = z 1:L , where z l ∈ {1, . . . , K} M . We consider two generative models p(x | z; θ), one where L = T and one where L = 1. Each latent in the former model corresponds to a word, and so we refer to this as a "local" model, whereas in the second model we view the latents as being "global", since there is one latent vector for the whole sentence. We use the following architectures for our encoders and decoder, as illustrated in Figure The encoder (parameterized by φ) maps an example x to the parameters of an approximate posterior distribution. Our encoder uses a single-layer Transformer Mean-Field Categorical VAE For the local model, we obtain the parameters of each categorical approximate posterior q mt as softmax(W m h t ), where each W m ∈ R K×d is a learned projection. 
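Stepping back to the VQ-VAE variant described above, the vector-quantization step with the straight-through estimator and the commitment term can be sketched in PyTorch as follows, following the standard VQ-VAE formulation this section describes; the surrounding encoder/decoder and the shared-code-book bookkeeping are omitted, and the nearest-neighbor search is written out with explicit squared distances.

```python
import torch
import torch.nn.functional as F

def vector_quantize(enc, codebook, beta=0.25):
    """Quantize encoder outputs against a code book.

    enc:      (num_latents, d) encoder vectors enc(x)_ml.
    codebook: (K, d) embeddings e_j of the K discrete values.
    Returns straight-through quantized vectors, hard indices, and the VQ losses.
    """
    # Squared Euclidean distance from each encoder vector to each code book entry.
    dists = (enc.pow(2).sum(1, keepdim=True)
             - 2.0 * enc @ codebook.t()
             + codebook.pow(2).sum(1))               # (num_latents, K)
    indices = dists.argmin(dim=1)                    # hard assignments z_hat
    quantized = codebook[indices]                    # e_{z_hat}
    # Code book loss pulls embeddings toward (stopped) encoder outputs;
    # commitment loss pulls encoder outputs toward (stopped) embeddings.
    codebook_loss = F.mse_loss(quantized, enc.detach())
    commitment_loss = F.mse_loss(enc, quantized.detach())
    # Straight-through estimator: gradients flow from `quantized` back into `enc`.
    quantized = enc + (quantized - enc).detach()
    return quantized, indices, codebook_loss + beta * commitment_loss
```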
For the global model, we obtain the parameters of each categorical approximate posterior q m1 as softmax t Wm ht T ; that is, we pass token-level h t vectors through learned projections W m , followed by mean-pooling. VQ-VAE For the local model, let d = d/M . We obtain enc(x) mt , the encoding of the mt th latent variable, as h t,(m-1) d:m d, following Hard EM We use the same encoder architecture as in the mean-field Categorical VAE case. Note, however, that we do not sample from the resulting categorical distributions. Rather, the softmax distributions are passed directly into the decoder. In the case of the mean-field Categorical VAE, we obtain a length-L sequence of vectors z l ∈ {1, . . . , K} M after sampling from the approximate posteriors. For the VQ-VAE, on the other hand, we obtain the sequence of ẑl vectors by taking the indices of the closest code book embeddings, as in Equation (1). In both cases, the resulting sequence of discrete vectors is embedded and consumed by the decoder. In particular, when learning with a VQ-VAE, the embedding of ẑml is simply e (m) ẑml , whereas for the Categorical VAE each discrete latent is embedded using a trained embedding layer. In the local model, when M > 1, we concatenate the M embeddings to form a single real vector embedding for the l th latent variable. In the global model, we use the M embeddings directly. This resulting sequence of T or M real vectors is then viewed as the source side input for a standard 1-layer Transformer encoderdecoder model As above, for Hard EM, we do not obtain a sequence of discrete vectors from the encoder, but rather a sequence of softmax distributions. These are multiplied into an embedding layer, as in the Categorical VAE case, and fed into the Transformer encoder-decoder model. Similar to 1. Pretraining an encoder-decoder model on indomain unlabeled text with an ELBO objective, with early stopping based on validation perplexity. 2. Fixing the encoder to get discrete latents for the downstream classification task, and training a small number of task-specific parameters on top, using varying amounts of labeled data. As noted in the introduction, we consider both reembedding these latents from scratch, or using the embeddings learned by the encoder. The datasets we use for classification are AG News, DBPedia, and Yelp Review Full In preprocessing, we space tokenize, lowercase, and clean the text as in 6 Experimental Details We first experiment with three common text models: CBOW In our experiments, we use Transformer layers with d model = 64. For optimization, we use Adam We find that using the discrete analytic KL divergence term directly in the ELBO objective leads to posterior collapse. The KL term vanishes to 0 and the q ml distributions converge to the uniform priors. To circumvent this, we modify the KL term to be max(KL, λ). This is known as Free Bits We vary the number of gradient steps in the E-step in {1, 3}. At evaluation time, we always take the argmax of z to get a hard assignment. In Figure Table In Figure To gain a better understanding of what the learned clusters represent, we examine their patterns on the AG News dataset labeled with four classes. Since VQ-VAEs and Categorical VAEs exhibit similar patterns, we focus on the latter model. Tables We see that clusters correspond to topical aspects of the input (either a document or a word). In particular, in the sentence-level case, documents in the same cluster often have the same ground-truth label. 
We also find that each of M latents independently corresponds to topical aspects (e.g., z 1 = 65 implies that the topic has to do with technology); thus, taking the combination of these latents seems to make the cluster "purer". The word-level clusters are also organized by topical aspects (e.g., many words in cluster 510 are about modern conflicts in the Middle East). While Hard EM achieves impressive performance when reembedding from scratch and when training on only 200 or 500 examples, we wonder whether this performance is due to the alternating optimization, to the multiple E-step updates per M-step update, or to the lack of sampling. We accordingly experiment with optimizing our VQ-VAE and Cat-VAE variants in an alternating way, allowing multiple inference network updates per update of the generative parameters θ. We show the results on the AG News dataset in Table We briefly discuss in what sense discrete latent representations reduce storage requirements. Given a vocabulary of size 30,000, storing a T -length sentence requires T log 2 30000 ≈ 14.9T bits. Our models require at most M L log 2 K bits to represent a sentence, which is generally smaller, and especially so when using a global representation. It is also worth noting that storing a d-dimensional floating point representation of a sentence (as continuous latent variable approaches might) costs 32d bits, which is typically much larger. While the above holds for storage, the space required to classify a sentence represented as M L integers using a parametric classifier may not be smaller than that required for classifying a sentence represented as a d-dimensional floating point vector. On the other hand, nearest neighbor-based methods, which are experiencing renewed interest In the classification experiments of Section 5, we evaluated our discrete representations by training a small classifier on top of them. Here we evaluate our global discrete representations in a document retrieval task to directly assess their quality; we note that this evaluation does not rely on the learned code books, embeddings, or a classifier. In these experiments we use each document in the development set of the AG News corpus as a query to retrieve 100 nearest neighbors in the training corpus, as measured by Hamming distance. We use average label precision, the fraction of retrieved documents that have the same label as the query document, to evaluate the retrieved neighbors. We compare with baselines that use averaged 300d pretrained word vectors (corresponding to each token in the document) as a representation, where neighbors are retrieved based on cosine or L 2 distance. We use GloVe with a 2.2 million vocabulary In Figure We have presented experiments comparing the discrete representations learned by a Categorical VAE, a VQ-VAE, and Hard EM in terms of their ability to improve a low-resource text classification system, and to allow for nearest neighbor-based document retrieval. Our best classification models are able to outperform previous work, and this remains so even when we reembed discrete latents from scratch in the learned classifier. We find that amortized Hard EM is particularly effective in lowresource regimes when reembedding from scratch, and that VQ-VAE struggles in these settings. | 796 | 1,558 | 796 |
Joint Chinese Word Segmentation and Part-of-speech Tagging via Two-way Attentions of Auto-analyzed Knowledge | Chinese word segmentation (CWS) and partof-speech (POS) tagging are important fundamental tasks for Chinese language processing, where joint learning of them is an effective one-step solution for both tasks. Previous studies for joint CWS and POS tagging mainly follow the character-based tagging paradigm with introducing contextual information such as n-gram features or sentential representations from recurrent neural models. However, for many cases, the joint tagging needs not only modeling from context features but also knowledge attached to them (e.g., syntactic relations among words); limited efforts have been made by existing research to meet such needs. In this paper, we propose a neural model named TWASP for joint CWS and POS tagging following the character-based sequence labeling paradigm, where a two-way attention mechanism is used to incorporate both context feature and their corresponding syntactic knowledge for each input character. Particularly, we use existing language processing toolkits to obtain the auto-analyzed syntactic knowledge for the context, and the proposed attention module can learn and benefit from them although their quality may not be perfect. Our experiments illustrate the effectiveness of the two-way attentions for joint CWS and POS tagging, where state-of-the-art performance is achieved on five benchmark datasets. 1 * Partially done as an intern at Sinovation Ventures. † Corresponding author. 1 TWASP (code and the best performing models) is released at | Chinese word segmentation (CWS) and part-ofspeech (POS) tagging are two fundamental and crucial tasks in natural language processing (NLP) for Chinese. The former one aims to find word boundaries in a sentence and the latter, on the top of segmentation results, assigns a POS tag to each word to indicate its syntactical property in the sentence. To effectively perform CWS and POS tagging, combining them into a joint task is proved to have better performance than separately conducting the two tasks in a sequence In addition, it is well known that syntactic structure is also able to capture and provide the information of long-distance dependencies among words. For example, Figure the surface word order, they are much closer in the dependency structure (the subject depends on "报告 VV" and "书 NN" depends on the the object). This example shows that syntactic structure provides useful cues for CWS and POS tagging. Syntactic knowledge can be obtained from manually constructed resources such as treebanks and grammars, but such resources require considerate efforts to create and might not be available for a particular language or a particular domain. A more practical alternative is to use syntactic structures automatically generated by off-the-shelf toolkits. Some previous studies In this paper, we propose a neural model named TWASP with a two-way attention mechanism to improve joint CWS and POS tagging by learning from auto-analyzed syntactic knowledge, which are generated by existing NLP toolkits and provide necessary (although not perfect) information for the task. 
In detail, for each input character, the proposed attention module extracts the context features associated with the character and their corresponding knowledge instances according to the auto-analyzed results, then computes the attentions separately for features and knowledge in each attention way, and finally concatenates the attentions from two ways to guide the tagging process. In doing so, our model can distinguish the important auto-analyzed knowledge based on their contributions to the task and thus avoid being influenced by some inferior knowledge instances. Compared to another prevailing model, i.e., key-value memory networks | The architecture of TWASP is illustrated in Figure To enhance the backbone paradigm, the proposed two-way attention module (as shown in the right part of Figure Auto-analyzed knowledge is demonstrated to be an effective type of resources to help NLP systems understand the texts be the sublists of S and K for x i . Here, s i,j and k i,j denote a context feature and a knowledge instance, respectively. In this paper, we use three types of syntactic knowledge for the joint task, namely POS labels, syntactic constituents, and dependency relations, where POS labels indicate the syntactic information of individual words, syntactic constituents provide the structural grouping information for a text span, and dependencies offer dependency relations between words. Figure POS Labels Figure Syntactic Constituents As shown in Figure Dependency Relations Given a character x i , let w i be the word that contains x i . The context features S i include w i , w i 's governor, and w i 's dependents in the dependency structure; those words combined with their inbound dependency relation labels form K i . For example, for x 6 ="分", w 6 = "分子", which depends on "结合" with a dependency label dobj. Therefore, S 6 = ["分子", "结 合"], and K 6 = ["分子 obj", "结合 root"]. Attention has been shown to be an effective method for incorporating knowledge into NLP systems For both features and their knowledge instances for X , we use a two-way attention design to have separate attention for S and K. Particularly, the two ways, namely, the feature way and the knowledge way, are identical in architecture, where each way has a feed-forward attention module Take the feature way as an example, the attention 3 Following weight for each context feature s i,j is computed by where h i is the vector from a text encoder for x i and e s i,j the embedding of s i,j . Then we have the weighted embedding a s i for all s i,j in S i via where denotes a element-wise sum operation. For the knowledge way, the same process is applied to get a k i by distinguishing and weighting each knowledge instance k i,j . Finally, the output of the two attention ways are obtained through an concatenation of the two vectors: To functionalize the joint tagging, the two-way attentions interact with the backbone model through the encoded vector h i and its output a i for each x i . For h i , one can apply many prevailing encoders, e.g., Bi-LSTM or BERT Once a i is obtained, we concatenate it with h i and send it through a fully connected layer to align the dimension of the output for final prediction: where W and b are trainable parameters. 
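To make the two attention ways concrete, the sketch below computes a feature-way vector, a knowledge-way vector, and the final projection o_i = W(h_i ⊕ a_i) + b described above. Since the scoring equations are not reproduced in this extract, a plain dot-product softmax over the embeddings is assumed, as is an embedding dimensionality equal to that of h_i.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_way(h_i, E):
    """One attention way: weight each row of E (one embedding per context
    feature or per knowledge instance) by its affinity to h_i, then take the
    weighted element-wise sum. The dot-product scoring is an assumption."""
    p = softmax(E @ h_i)
    return p @ E

def two_way_attention(h_i, E_feat, E_know, W, b):
    """Concatenate the feature-way and knowledge-way outputs, append them to
    h_i, and project through the fully connected layer used for tag prediction;
    W is expected to have shape (num_tags, 3 * d)."""
    a_i = np.concatenate([attention_way(h_i, E_feat), attention_way(h_i, E_know)])
    return W @ np.concatenate([h_i, a_i]) + b
```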
Afterwards, conditional random fields (CRF) is used to estimate the probability for y i over all possible joint CWS and POS tags under x i and y i-1 by Here, W c and b c are the weight matrix and the bias vector, respectively, and they are estimated using the (y i-1 , y i ) tag pairs in the gold standard. 3 Experiments We employ five benchmark datasets in our experiments, where four of them, namely, CTB5, CTB6, CTB7, and CTB9, are from the Penn Chinese TreeBank Table 2: Numbers of context features (S) and their corresponding knowledge instances (K) for five benchmark datasets, based on the output of SCT and BNP. Note that the K for the UD dataset follows the CTB criteria, because SCT and BNP were trained on CTB. To obtain the aforementioned three types of knowledge, we use two off-the-shelf toolkits, Stanford CoreNLP Toolkit (SCT) For the two-way attention module, we randomly initialize the embeddings for all context features and their corresponding knowledge instances, where one can also use pre-trained embeddings In our main experiment, we run our TWASP on the five benchmark datasets using the three encoders, i.e., Bi-LSTM, BERT, and ZEN. The results on the F-scores of word segmentation and joint CWS and POS tagging are in Table There are several observations. First, for all encoders, the two-way attentions provide consistent enhancement to the baselines with different types of knowledge. Particularly, although the baseline model is well-performed when BERT (or ZEN) serves as the encoder, the attention mod- 11 We use the evaluation script from ule is still able to further improve its performance with the knowledge produced by the toolkits even though the toolkits have worse-than-baseline results for the joint task. Second, among different types of knowledge, POS labels are the most effective ones that help the joint task. For instance, among BERT-based models, the one enhanced by POS knowledge from SCT achieves the best performance on most datasets, which is not surprising because such knowledge matches the outcome of the task. In addition, for BERT-based models enhanced by knowledge from BNP (i.e., BERT + POS (BNP) and BERT + Syn. (BNP)), syntactic constituents provide more improvement than POS labels on all CTB datasets. This observation could be explained by that BNP is originally designed for constituency parsing with CTB criteria; the syntactic constituents are complicated while effective when they are accurate. Third, while SCT and BNP were trained on CTB, whose tagset is very different from the two tagsets for UD, TWASP still outperforms the baselines on UD with the knowledge provided by SCT and BNP, indicating that syntactic knowledge is useful even when it uses different word segmentation and POS tagging criteria. Table Domain variance is an important factor affecting the performance of NLP systems The comparison between the baselines and TWASP with POS knowledge clearly shows the consistency of performance improvement with twoway attentions, where for both BERT and ZEN, TWASP outperforms the baselines for all genres on the joint labels. In addition, similar to the observations from the previous experiment, both accurate and inaccurate POS knowledge are able to help the joint task. For example, although the SCT results on several genres (e.g., CS, DF, SC) are much worse than of the BERT baseline, the POS labels produced by SCT can still enhance TWASP on word segmentation and joint tagging with the proposed two-way attention module. 
In the first analysis, we compare our two-way attention with normal attention. For normal attention, we experiment three ways of incorporating context features and knowledge: (1) using context features and knowledge together in the attention, where all features or knowledge instances are equally treated in it; (2) using context features only; and (3) using knowledge only. We run these experiments with BERT encoder and POS knowledge from SCT on CTB5 and report the results in Table There are other methods for using both context features and knowledge in a neural framework, such as key-value memory networks (kvMN) There are several observations. First, compared to only using one type of knowledge (refer to Table When the toolkit provides accurate knowledge, it is not surprising that our two-way attention model would benefit from the auto-analyzed knowledge. Interestingly, even when the toolkit provides inaccurate output, our model might still be able to benefit from such output. Figure There are basically two approaches to CWS and POS tagging: to perform POS tagging right after word segmentation in a pipeline, or to conduct the two tasks simultaneously, known as joint CWS and POS tagging. In the past two decades, many studies have shown that joint tagging outperforms the pipeline approach In this paper, we propose neural approach with a two-way attention mechanism to incorporate autoanalyzed knowledge for joint CWS and POS tagging, following a character-based sequence labeling paradigm. Our proposed attention module learns and weights context features and their corresponding knowledge instances in two separate ways, and use the combined attentions from the two ways to enhance the joint tagging. Experimental results on five benchmark datasets illustrate the validity and effectiveness of our model, where the two-way attentions can be integrated with different encoders and provide consistent improvements over baseline taggers. Our model achieves stateof-the-art performance on all the datasets. Overall, this work presents an elegant way to use autoanalyzed knowledge and enhance neural models with existing NLP tools. For future work, we plan to apply the same methodology to other NLP tasks. | 1,509 | 2,226 | 1,509 |
Fidelity-Enriched Contrastive Search: Reconciling the Faithfulness-Diversity Trade-Off in Text Generation | In this paper, we address the hallucination problem commonly found in natural language generation tasks. Language models often generate fluent and convincing content but lack consistency with the provided source, resulting in potential inaccuracies. We propose a new decoding method called Fidelity-Enriched Contrastive Search (FECS), which augments the Contrastive Search framework with contextaware regularization terms. FECS promotes tokens that are semantically similar to the provided source while penalizing repetitiveness in the generated text. We demonstrate its effectiveness across two tasks prone to hallucination: abstractive summarization and dialogue generation. Results show that FECS consistently enhances faithfulness across various language model sizes while maintaining output diversity comparable to well-performing decoding algorithms. 1 | Language models (LMs) have achieved remarkable success in generating human-like text, fostering advancements across numerous Natural Language Processing (NLP) applications. Despite the fluent and seemingly convincing outputs produced by LMs, these models can occasionally generate content that is factually inconsistent with the provided source 1.3B 2.7B 6.7B turn to a less investigated lens-decoding-to improve faithfulness, We evaluate FECS on two tasks particularly prone to text hallucination: abstractive summarization and dialogue generation | In this section, we present preliminary information on Contrastive Search To address shortcomings in existing decoding methods, Here, V k denotes a set of k candidate tokens with the top-k probability from the model's prediction distribution p θ (•|x 0:c+t ). The model confidence term represents the probability of the candidate token v, while the degeneration penalty term signifies the maximum value of the cosine similarity sim(•, •) between candidate token v and all previously generated tokens {x c , ..., x c+t-1 }. Specifically, sim(•, •) employs the token representation h x i and h v from the model's last hidden state, calculated by appending v to x 0:c+t as model input. α serves as a pre-determined, nonnegative hyper-parameter; when α equals 0, Contrastive Search reduces to greedy decoding. Essentially, Contrastive Search preserves coherence by choosing outputs from the top-k probable candidates while also curbing degeneration behaviors such as repetitions, thereby promoting diversity. Motivated by Contrastive Search, we extend this framework by integrating a faithfulness term that encourages factuality and reduces hallucination. Using the notations from Section 2.1, we define FECS as follows: Consider an input x 0:c+t at time step t, where x 0:c represents the prefix context, and x c:c+t is the previously generated tokens. We further decompose x 0:c into: (1) the prompts x 0:s , and (2) the provided source x s:c , which the output is expected to remain faithful to. FECS generates the next token x c+t via the following formula: The newly introduced faithfulness term rewards candidate tokens exhibiting high semantic similarity to tokens in the source content. Specifically, the faithfulness term denotes the maximum value of the cosine similarity sim(•, •) between the candidate token v and all source tokens {x s , ..., x c-1 }. Here, β is also a pre-determined, non-negative hyperparameter. 
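A minimal sketch of one FECS decoding step over the top-k candidates, combining the model confidence, the degeneration penalty weighted by α, and the faithfulness reward weighted by β. How the confidence term is reweighted once β is introduced is not spelled out above, so retaining the (1 - α) factor from Contrastive Search is an assumption of this sketch.

```python
import numpy as np

def cos(u, v):
    return float(u @ v) / float(np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

def fecs_step(cand_probs, cand_reps, gen_reps, src_reps, alpha, beta):
    """cand_probs: model confidence p(v | context) for each top-k candidate;
    cand_reps, gen_reps, src_reps: last-hidden-state vectors of the candidates,
    the previously generated tokens, and the source tokens, respectively.
    Returns the index of the selected candidate."""
    scores = []
    for p, h_v in zip(cand_probs, cand_reps):
        degen = max(cos(h_v, h) for h in gen_reps) if gen_reps else 0.0
        faith = max(cos(h_v, h) for h in src_reps)
        scores.append((1 - alpha) * p - alpha * degen + beta * faith)
    return int(np.argmax(scores))
```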
3 Experimental Setup We evaluate our method, FECS, on two tasks known for their susceptibility to hallucination issues: abstractive summarization and dialogue generation. For the abstractive summarization task, we adopt CNN-DailyMail (CNN-DM) dataset In our experiments involving abstractive summarization, we adopt OPT Our evaluation process employs the following metrics: Standard Metrics. For assessing the quality of summarization, we employ ROUGE Faithfulness Metrics. To measure factuality in summarization, we use FEQA where Rep-n(x) measures the proportion of n-gram repetitions in x, and is calculated as A higher diversity score suggests the model outputs exhibit less degeneration Table As we discussed in Section 1, model outputs must balance faithfulness and diversity. To better understand the impact of our proposed faithfulness reward on these two facets in the context of the original Contrastive Search, we calculated the improvements in faithfulness and the reductions in diversity based on the results from both the proposed FECS and the Contrastive Search. Latency. To assess the decoding latency of our proposed FECS objective, we report the average decoding time (sec) per instance in Table The role of α. To establish a more comprehensive baseline, we evaluate FECS against Contrastive Search with different values of α on the 6.7B model. Intuitively, a smaller α value (i.e., a lower degree of diversity) might contribute to a more factual performance. However, as shown in Table In addition to the automatic evaluation, we also perform human evaluation to assess the faithfulness of our proposed FECS on the abstractive summarization task. We compare FECS against Contrastive Search, and ask annotators to vote which response is considered more faithful to the provided source (i.e., the text to be summarized). Specifically, we randomly sample 20 instance for each of the three model sizes, with a total of 60 instances for the evaluation. More details including the full evaluation protocol are provided in Appendix A.2. We present the results in Figure This paper introduces a novel decoding approach, Fidelity-Enriched Contrastive Search (FECS), designed to enhance faithfulness in text generation. Our experimental results on abstractive summarization and dialogue generation demonstrated the efficacy of FECS. It consistently improved faithfulness across various LM scales while preserving a level of diversity that is comparable to other leading decoding algorithms. Particularly when using larger LMs, it notably enhances faithfulness with only a minor impact on diversity. This indicates that FECS performs effectively when larger LMs are employed in dialogue generation tasks. In the future, we plan to explore how FECS performs with different kinds of source content, including erroneous or ambiguous inputs. Firstly, while FECS presents an improvement in faithfulness and diversity trade-off, its performance could be influenced by the quality of the source content. The assumption that source content is always correct and complete may not hold true in all scenarios, particularly in cases where the input data is ambiguous, incomplete, or erroneous. Secondly, the faithfulness assessment is primarily quantitative, based on FEQA and Q 2 established metrics. Although these metrics provide an essential standard for comparing models, they may not capture all nuanced aspects of faithfulness, such as the preservation of subtle implications or subjective information. 
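For reference, a minimal implementation of the repetition and diversity scores used in the evaluation above, assuming the common formulation in which rep-n is the fraction of repeated n-grams and the diversity of an output is the product of (1 - rep-n) for n = 2, 3, 4.

```python
def rep_n(tokens, n):
    """Fraction of n-grams in a token sequence that are repeats."""
    grams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    if not grams:
        return 0.0
    return 1.0 - len(set(grams)) / len(grams)

def diversity(tokens):
    """Assumed formulation: product over n = 2..4 of (1 - rep-n)."""
    score = 1.0
    for n in (2, 3, 4):
        score *= 1.0 - rep_n(tokens, n)
    return score
```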
The full human evaluation protocol is presented in Figure | 858 | 548 | 858 |
DOPA METER -A Tool Suite for Metrical Document Profiling and Aggregation | We present DOPA METER, a tool suite for the metrical investigation of written language, that provides diagnostic means for its division into discourse categories, such as registers, genres, and style. The quantitative basis of our system are 120 metrics covering a wide range of lexical, syntactic, and semantic features relevant for language profiling. The scores can be summarized, compared, and aggregated using visualization tools that can be tailored according to the users' needs. We also showcase an application scenario for DOPA METER. | The way how we encode contents in natural language utterances gives rise to linguistic divisions into registers, genres, style levels, etc. (for a thorough distinction of these terms, see In this paper, we address a large variety of such behavioral aspects of language use from a metrical perspective. None of these metrics is new, but their assembly and broad coverage in a coherent tool suite and modular software framework is. We also provide means for summarization, comparison and aggregation of results and their proper visualization. | The tool-based computational analysis of behavioral traits of language use can be divided into three branches of research: (1) readability checkers with language complexity measures incorporating mostly surface-level syntactic and lexicosemantic features of utterances, (2) stylometrics tools with strong emphasis on powerful lexicostatistical metrics, and (3) psychometrics devices with mostly simple frequency-based computations complemented by dictionaries with psychologically typed lexical categories. From the perspective of readability (for a survey, see In the field of stylometrics (for a survey, see The seamless integration of various analytical tools under a common programming framework (making use of R's core library but also extending it by various clustering algorithms and machine learning classifiers) and its public accessibility on GITHUB The third stream of work emphasizes human lexical choice patterns in terms of the psychometrics of word use. Perhaps its most prominent representative is the Linguistic Inquiry and Word Count approach and its associated LIWC engine (Tausczik and Pennebaker, 2010). LIWC was recently compared and outperformed by the SEANCE system Recently, Despite the remarkable progress that has been made already-the proliferation of surface-level, linguistic and cognitive features under scrutiny, and the growing number of metrics making use of them-we observe a fundamental lack of integration of and abstraction from single counts and scores in these precursors. Accordingly, a major goal of our work is to provide reasonable summarization, comparison, and aggregation levels for single metrics so that divisions into registers, genres and styles can be computed on the fly based on the contributions of a wide range of linguistic layers (integrating lexical, syntactic, and semantic features) for complex collections of (multilingual) linguistic data in terms of (sets of) corpora. and supports all SPACY compatible language modules. Our system is publicly accessible via GITHUB. The three-layered architecture of DOPA METER is depicted in Fig. 
• arbitrarily many text corpora that can also be grouped into collections of corpora which serve as textual input channel (including a preprocessing pipeline), • the feature hub that elicits relevant features from the corpora for use by a large variety of metrics, • and three analytics layers-apart from simple report generation (summaries of metricsderived scores), we offer a comparison mode across documents and corpora, as well as cluster-based aggregation of results. The input for DOPA METER consists of a set of text corpora that can be bundled into collections, for convenience. Each corpus consists of single text files, the documents, each of which will automatically be pre-processed and split into sentences and tokens. The computation of features is divided into (1) simple feature counts whose results feed (2) a collection of metrics. We here distinguish micro statistics (at the document level) and macro statistics (at the corpus level). The feature hub comprises sets of single features and groups them for better comprehensibility (see the discussion below and Table In order to get started we perform basic counts of sentences, tokens, types (vocabulary size), lemmata and characters using SPACY tooling (Corpus/Doc Counts in Fig. n-grams are sequential series of (configurable) n={1,2,3,...} tokens or (PoS) tags. The scores calculate the ratios of n-grams for single documents and whole corpora or corpus collections. Lexical Diversity subsumes a group of 24 features borrowing from stylometric vocabulary metrics. Among others, this includes the common typetoken ratio (TTR), but also more sophisticated metrics such as Guiraud's R or Herdan's C. We also incorporate metrics which address the frequency spectrum of lexical items (e.g., Sichel's S) and ones capturing lexical distributions over the whole document (e.g., Moving-Average TTR). Last but not least, we also provide metrics for lexical density such as the ratio of function words. For surveys of metrics of lexical diversity, see Surface pattern metrics, also known as Readability scores, mainly focus on syllable counts, token and sentence length and thus target surface-level phenomena only. Among the large number of possible choices, we included into DOPA METER 19 metrics, among them Flesch-Kincaid, Dale-Chall (for English, only), SMOG, Gunning fog, and the four Wiener Sachtext formulas Syntax-focused metrics account for the two major syntax representation formats: dependency and constituency. For dependency parsing, we exploit the transition-based dependency parser embedded in SPACY The parse metrics take general parse graph properties into account, such as the average maximum depth for each parse tree, i.e., the longest path from the root node to a leaf node, the maximum fan-out of each parse tree, i.e., the largest number of child nodes of a node in the entire parse tree, and the inverse average out-degree centrality value, i.e., the number of out-going edges, computed over all dependency graphs of all sentences of a document. We here focus on lexico-semantic resources that provide a linkage between lemmas in terms of various semantic relations. Lexicons structured this way can be regarded as semantic networks. Our focus is on relations typically provided by WORD-NET-style specifications which feature synonymy, antonymy, taxonomy (hyponyms/hypernyms), and partonymy (parts and wholes). 
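The dependency-based parse metrics described just above (maximum depth and maximum fan-out per tree) can be sketched directly from a head-index representation of a sentence; the head-array format and the convention that the root has depth 0 are assumptions of this sketch rather than details of the toolkit.

```python
def parse_tree_metrics(heads):
    """heads[i] is the index of token i's head, or -1 for the root.
    Returns (maximum depth, maximum fan-out) of one dependency tree."""
    children = {i: [] for i in range(len(heads))}
    root = None
    for i, h in enumerate(heads):
        if h == -1:
            root = i
        else:
            children[h].append(i)

    def depth(node):
        return 0 if not children[node] else 1 + max(depth(c) for c in children[node])

    return depth(root), max(len(c) for c in children.values())

# "she gave him the report": the subject, object, and "report" hang off the verb
print(parse_tree_metrics([1, -1, 1, 4, 1]))   # -> (2, 3)
```

Document-level scores would then average these per-tree values over all sentence parses, as described above.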
Based on such knowledge-"heavy" resources we define several metrics that exploit the topological structures spanned in these semantic networks as instantiated by the lexical items we identify as lemmas of these lexicons within each sentence. Accordingly, we defined metrics which focus on relational depth by determining the minimal path length of each reading of each lemma within a document (i.e., the distance from the top node of the semantic network to the lemma) following taxonomic links (hypernymy or hyponymy links, only), sum up these individual length scores and average over the number of all the lemmas' readings, and on semantic richness, i.e., for each (reading of the) lemma in a sentence, we determine all semantic relation instances (i.e., hypernyms, hyponyms, parts (is-part) and wholes (has-part), antonyms) it shares with other lemmas in the lexicon and average this number over all readings per lemma in the document. Scores and their averages are also available for each individual semantic relation only (e.g., the number of hyponyms of all instantiated lemmas). DOPA METER supports scores for the eight fundamental emotional variables (valence, arousal, dominance, joy, anger, sadness, fear and disgust) based on dictionary look-ups incorporating the emotion lexicons from In the summarization mode, statistical reports of the resulting scores are generated per document and corpus (collection), including common information, such as min/max values, means, quartiles, etc. This reporting mode describes fundamental quantitative characteristics in the feature hub and can already pinpoint at differences between documents and corpora that can be deeper explored by larger-scale clustering or classification algorithms. The comparison mode points out differences or similarities between complete text corpora or userdefined subsets therefrom. It is based on a differential analysis of the corpus vocabulary, n-grams and the metrics targeting different levels of linguistic analysis mentioned above. Besides the metrics already introduced, we also make use of well-known distance metrics from the field of stylometrics and authorship detection, e.g., Burrows' ∆ In addition to these stylometric computations, we incorporate scores originating from the field of machine translation, such as BLEU Going beyond the micro statistics at the single document and corpus level, the aggregation mode is able to compute dependencies between different (sets of) corpora at the macro level of analysis. With varying configurations of features, k-means and t-distributed Stochastic Neighbor Embedding (t-SNE) (van der Maaten and Hinton, 2008) with DBSCAN We now illustrate facets of the rich functionality of DOPA METER. Our scenario features two languages, English and German, and a broad application domain (medicine) with six corpora (collections) from a wide range of genres (see en.Clin incorporates public corpora supplied for the I2B2 and N2C2 challenge series, 12 and en.SocMed combines English language TWITTER corpora with biomedical content: BEAR 9 Instructions how to build the corpora in order to reproduce our experiments can be found under 11 The highest syntactic complexity in terms of parse tree depth is attributed to the German expertlayman corpus (expert statements seem to suffer from 'hard' syntax), with no substantial differences for the remaining corpora. 
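As an illustration of the relational-depth and semantic-richness metrics defined at the start of this section, the sketch below uses NLTK's English WordNet as a stand-in for the WORDNET-style lexicons mentioned above; the per-reading averaging and the set of relation types counted follow the description, but the exact counting conventions remain assumptions of the sketch.

```python
from nltk.corpus import wordnet as wn   # requires nltk.download("wordnet")

def relational_depth(lemma):
    """Average minimal taxonomic depth over all readings (synsets) of a lemma."""
    synsets = wn.synsets(lemma)
    if not synsets:
        return 0.0
    return sum(s.min_depth() for s in synsets) / len(synsets)

def semantic_richness(lemma):
    """Average number of hypernym, hyponym, part, whole, and antonym relation
    instances per reading of the lemma."""
    synsets = wn.synsets(lemma)
    if not synsets:
        return 0.0
    total = 0
    for s in synsets:
        total += len(s.hypernyms()) + len(s.hyponyms())
        total += len(s.part_meronyms()) + len(s.part_holonyms())
        total += sum(len(l.antonyms()) for l in s.lemmas())
    return total / len(synsets)
```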
The German social media corpus (in contrast to the English one) is the richest in terms of synonyms, whereas both clinical corpora are semantically poor at that level (adhering to canonical medical terminology-the English one being even poorer than the German one). The medical German WIKIPEDIA is in a similar range with German clinical and PUBMED documents on that dimension. To highlight the lexical intersection among corpora, the heatmap in Fig. All these observations indicate that none of the features in isolation is capable of properly predicting specific discourse categories, such as registers or text genres. Hence, a deeper exploration of dependencies between the features we measure seems more appropriate and DOPA METER might be a suitable toolkit for this endeavor. We introduced DOPA METER, a toolkit for quantifying feature distributions at the lexical, syntactic and semantic dimension. We supply 120 metrics for scoring linguistic behavior at these axes. Scores can be summarized, compared, and aggregated using flexibly tailorable visualization tools. DOPA METER's feature collection reflects one main design goal of our work, namely the integration of as many linguistic levels as possible, thus moving away from much more selective approaches in stylometrics and psychometrics. A second unique feature of our approach is its focus on lucid system architecture for flexible system engineering, i.e., easy maintainability and augmentation by new metrics and language resources (corpora, lexicons) in a coherent all-in-one system design. This contrasts with the proliferation of stylometric extensions spread over lots of local GITHUB links lacking further integration, on the one hand, and frozen system packages in the psychometric domain, on the other hand. The source code and its documentation are provided under the open MIT licence and our tool can be conveniently expanded and adapted to specific needs. This way, DOPA METER may be useful as a metadata generator for documents and text corpora, with facilities for quantitative data description (scoring), comparison and aggregation. Such an approach may also pave the way towards an empirically sound way of routinely running NLP data diagnostics DOPA METER uses a wide range of external resources, such as corpora, lexicons or terminology systems with potentially built-in biases. Users of DOPA METER should be sensitive towards potential pitfalls when analyzing data and reporting the results gathered with DOPA METER. DOPA METER combines metrics, e.g., for readability or syntactic complexity, which are commonly used but often lack comparative evaluation. Hidden, and potentially unrecognized or unwarranted, dependencies between them should be carefully considered. Despite our efforts to include at least two languages (English and German), the multilingual dimension needs further elaboration. When doing so one might encounter shortcomings or even gaps for particular languages (e.g., for readability formulae, corpora, terminologies or lexicons). Finally, DOPA METER's aggregation component needs further extension by complementary clustering and ML classification algorithms. This work was supported by BMBF within the SMITH project under grants 01ZZ1803G and 01ZZ1803A such as the GeMTeX project under grant 01ZZ2314D. | 543 | 540 | 543 |
Composing Finite State Transducers on GPUs | Weighted finite state transducers (FSTs) are frequently used in language processing to handle tasks such as part-of-speech tagging and speech recognition. There has been previous work using multiple CPU cores to accelerate finite state algorithms, but limited attention has been given to parallel graphics processing unit (GPU) implementations. In this paper, we introduce the first (to our knowledge) GPU implementation of the FST composition operation, and we also discuss the optimizations used to achieve the best performance on this architecture. We show that our approach obtains speedups of up to 6× over our serial implementation and 4.5× over OpenFST. | Finite-state transducers (FSTs) and their algorithms Composition is one of the most important operations on FSTs, because it allows complex FSTs to be built up from many simpler building blocks, but it is also one of the most expensive. Much work has been done on speeding up composition on a single CPU processor There has also been some successful work on speeding up composition using multiple CPU cores In this paper, we parallelize the FST composition task across multiple GPU cores. To our knowledge, this is the first successful attempt to do so. Our approach treats the composed FST as a sparse graph and uses some techniques from the work of | In this section, we introduce the notation that will be used throughout the paper for the composition task. A weighted FST is a tuple M = (Q, Σ, Γ, s, F, δ), where • Q is a finite set of states. • Σ is a finite input alphabet. • Γ is a finite output alphabet. • s ∈ Q is the start state. • F ⊆ Q are the accept states. Note that we don't currently allow epsilon transitions; this would require implementation of composition filters For the composition task, we are given two weighted FSTs: Call Γ, the alphabet shared between the two transducers, the inner alphabet, and let m = |Γ|. Call Σ and ∆, the input alphabet of M 1 and the output alphabet of M 2 , the outer alphabets. The composition of M 1 and M 2 is the weighted FST That is, for each pair of transitions with the same inner symbol, the composed transducer has a transition Transitions with the same source, target, input, and output symbols are merged, adding their weights. In this section, we describe our composition method and its implementation. If implemented naïvely, the above operation is inefficient. Even if M 1 and M 2 are trim (have no states that are unreachable from the start state or cannot reach the accept state), their composition may have many unreachable states. Figure We expect this problem to be more serious when the FSTs to be composed are sparse, that is, when there are many pairs of states without a transition between them. And we expect that FSTs used in natural language processing, whether they are constructed by hand or induced from data, will often be sparse. For example, below (Section 4.1), we will describe some FSTs induced from parallel text that we will use in our experiments. We measured the sparsity of these FSTs, shown in Table We first present a serial composition algorithm (Algorithm 2). This algorithm performs a breadthfirst search (BFS) of the composed FST beginning from the start state, so as to avoid creating inaccessible states. As is standard, the BFS uses two data structures, a frontier queue (A) and a visited set (Q), which is always a superset of A. 
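The serial algorithm just described can be written compactly as below. The frontier queue, the visited set, and the merging of identical transitions by adding their weights follow the description above; the dictionary-of-dictionaries transition format and the multiplication of the two matched weights are assumptions of this sketch.

```python
from collections import deque, defaultdict

def compose(delta1, delta2, s1, s2):
    """delta1[q] and delta2[q] map an inner symbol b to a list of
    (outer symbol, target state, weight) transitions leaving state q.
    Returns the reachable transitions of the composed machine as a dict
    keyed by (source pair, input symbol, output symbol, target pair)."""
    start = (s1, s2)
    frontier, visited = deque([start]), {start}
    delta = defaultdict(float)
    while frontier:
        q1, q2 = frontier.popleft()
        for b in set(delta1[q1]) & set(delta2[q2]):       # shared inner symbols
            for a, r1, p in delta1[q1][b]:
                for c, r2, w in delta2[q2][b]:
                    delta[((q1, q2), a, c, (r1, r2))] += p * w
                    if (r1, r2) not in visited:
                        visited.add((r1, r2))
                        frontier.append((r1, r2))
    return dict(delta)
```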
For each state q 1 q 2 popped from A, the algorithm composes Set of states created so far 3: δ ← ∅ Transition function 4: while |A| > 0 do 5: O the one all transitions from q 1 with all transitions from q 2 that have the same inner symbol. The composed edges are added to the final transducer, and the corresponding target states q 1 q 2 are pushed into A for future expansion. The search finishes once A runs out of states to expand. Our GPU implementation stores FST transition functions in a format similar to compressed sparse row (CSR) format, as introduced by our previous work • z is the number of transitions with nonzero weight. • R is an array of length |Q|m + 1 containing offsets into the arrays T , O, and P. If the states are numbered 0, . . . , |Q| -1 and the inner symbols are numbered 0, . . . m -1, then state q's outgoing transitions on inner symbol b can be found starting at the offset stored in must equal z. • T [k] is the target state of the kth transition. • O[k] is the outer symbol of the kth transition. • P[k] is the weight of the kth transition. Similarly to several toolkits (such as OpenFST), we require the edges in T, O, P to be sorted by their inner symbols before executing the algorithm, which allows faster indexing and simpler parallelization. Our parallel composition implementation has the same overall structure as the serial algorithm, and is shown in Algorithm 2. The two transducers to be composed are stored on the GPU in global memory, in the format described in Section 3.3. Both transducers are sorted according to their inner symbol on the CPU and copied to the device. The memory requirements for a large transducer complicates the storage of the result on the GPU global memory. If the memory of states and edges generated by both inputs does not fit on the GPU, then the composition cannot be computed using only device memory. The execution time will also be affected if the result lives on the device and there is a limited amount of memory available for temporary variables created during the execution. Therefore, the output transducer must be stored on the host using page-locked memory, with the edge transitions unsorted. Page-locked, or pinned, memory is memory that will not get paged out by the operating system. Since this memory cannot be paged out, the amount of RAM available to other applications will be reduced. This enables the GPU to access the host memory quickly. Pinned memory provides better transfer speeds since the GPU creates different mappings to speed up cudaMemcpy operations on host memory. Allocating pinned memory consumes more time than a regular malloc, List of transitions 4: while |A| > 0 do 5: if h(a, c, r 1 r 2 ) ∈ H then 15: red ← true 16: concatenate δ d to δ for q ∈ A d do push(A, q) 23: if red then 24: sort δ[q 1 q 2 ] 25: reduce therefore it should be done sporadically. In this work, pinned memory is allocated only once at start time and released once the composition has been completed. Using page-locked memory on the host side as well as pre-allocating memory on the device decreases the time to both copy the results back from the GPU, and the time to reuse device structures used on different kernel methods. The frontier queue A is stored on the host. For each state q 1 q 2 popped from A, we need to compose all outgoing transitions of q 1 and q 2 obtained from M 1 and M 2 respectively. Following previous work The outer loop launches a CUDA kernel for each inner symbol b ∈ Γ. 
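A host-side view of the CSR-like layout described above: given the offset array R and the parallel arrays T, O, and P, the transitions of state q on inner symbol b occupy one contiguous slice.

```python
def outgoing(R, T, O, P, q, b, m):
    """Transitions leaving state q on inner symbol b. R holds |Q| * m + 1
    offsets into the parallel arrays T (targets), O (outer symbols), and
    P (weights); m is the size of the inner alphabet."""
    lo, hi = R[q * m + b], R[q * m + b + 1]
    return [(T[k], O[k], P[k]) for k in range(lo, hi)]
```

Because both machines store these slices contiguously and sorted by inner symbol, a kernel launched for symbol b can hand one pair of matching transitions to each thread; with slice lengths n1 and n2 for the two states, one natural assignment (assumed here, not taken from the paper) is the pair (k // n2, k % n2) for thread k.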
For example, to compose the start states in Figure We choose a kernel block size of 32 for the kernel calls since this is the amount of threads that run in parallel on all GPU streaming multiprocessors at any given time. If the number of threads required to compose a tuple of states is not divisible by 32, the number of threads is rounded up to the closest multiple. When several input tuples generate less than 32 edges, multiple cores will remain idle during execution. Our approach obtains better speedups when the input transducers are able to generate a large amount of edges for each symbol b and each state tuple on the result. In general, the kernels may take widely varying lengths of time based on the amount of composed edges; using streams enables the scheduler to minimize the number of idle cores. The two inner loops represent the threads of the kernel; each composes a pair of transitions sharing an inner symbol b. Because these transitions are stored contiguously (Figure Figure The composed transitions are first appended to a pre-allocated buffer δ d on the GPU. After processing the valid compositions leaving q 1 q 2 , all the transitions added in δ d are appended in bulk to δ on the host. Updating frontier and visited set Each destination state r 1 r 2 , if previously unvisited, needs to be added to both A and Q. Instead of adding it directly to A (which is stored on the host), we add it to a buffer A d stored on the device to minimize the communication overhead between the host and the device. After processing q 1 q 2 and synchronizing all streams, A d is appended in bulk to A using a single cudaMemcpy operation. The visited set Q is stored on the GPU device as a lookup table of length |Q 1 ||Q 2 |. Reduction Composed edges with the same source, target, input, and output labels must be merged, summing their probabilities. This is done in lines 23-25, which first sort the transitions and then merge and sum them. To do this, we pack the transitions into an array of keys and an array of values. Each key is a tuple (a, c, r 1 r 2 ) packed into a 64-bit integer. We then use the sort-by-key and reduce-by-key operations provided by the Thrust library. The mapping of tuples to integers is required for the sort operation since the comparisons required for the sorting can be made faster than using custom data structures with a custom comparison operator. 1 Because the above reduction step is rather expensive, lines 14-17 use a heuristic to avoid it if possible. H is a set of transitions represented as a hash table without collision resolution, so that lookups can yield false positives. If red is false, then there were no collisions, so the reduction step can be skipped. The hash function is simply h(a, c, r 1 r 2 ) = a + c|Σ|. In more detail, H actually maps from hashes to integers. Clearing H (line 8) actually just increments a counter i; storing a hash k is implemented as H[k] ← i, so we can test whether k is a member by testing whether H[k] = i. An atomic operation (atomicExch) is used to consistently check H since several threads update this variable asynchronously. We tested the performance of our implementation by constructing several FSTs of varying sizes and comparing our implementation against other baselines. In our previous work Figure GIZA++ Viterbi word alignments. Both were trained on the Europarl corpus. We then precomposed them using the Carmel toolkit Our experiments were tested using two different architectures. 
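The reduction step can be mirrored on the host with an ordinary sort followed by a run-length merge. Packing (a, c, r1r2) into a 64-bit key is omitted because Python tuples sort directly, and the hash-based pre-check is left out, so this is only an analogue of the Thrust sort-by-key and reduce-by-key calls, not the device code.

```python
from itertools import groupby

def reduce_transitions(edges):
    """edges: list of ((a, c, target_pair), weight) composed from one source
    state. Sort on the key, then sum the weights of runs with identical keys,
    merging duplicate transitions as the reduction step requires."""
    edges.sort(key=lambda e: e[0])
    return [(key, sum(w for _, w in group))
            for key, group in groupby(edges, key=lambda e: e[0])]
```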
The serial code was measured using a 16-core Intel Xeon CPU E5-2650 v2, and the parallel implementation was executed on a system with a GeForce GTX 1080 Ti GPU connected to a 24-core Intel Xeon E5-2650 v4 processor. In this work, OpenFST OpenFST's composition operation can potentially create multiple transitions (that is, two or more transitions with the same source state, destination state, input label, and output label); a separate function (ArcSumMapper) must be applied to merge multiple transitions and sum their weights. Previous work also requires an additional step if identical edges need to be merged. Table we compare our implementation against Open-FST both with and without the reduction of transitions with an identical source,target,input, and output. We analyzed the time to compose all possible edges without performing any reductions (Algorithm 1, line 8). The second setup analyzes the time it takes to compute the composition and the arc summing of identical edges generated during the process. Table Table One comparison missing above is a comparison against a multicore processor. We attempted to compare against a parallel implementation using OpenMP on a single 16-core processor, but it did not yield any meaningful speedup, and even slowdowns of up to 10%. We think the reason for this is that because the BFS-like traversal of the FST makes it impractical to process states in parallel, the best strategy is to process and compose transitions in parallel. This very fine-grained parallelism does not seem suitable for OpenMP, as the overhead due to thread initialization and synchronization is higher than the time to execute the parallel sections of the code where the actual composition is calculated. According to our measurements, the average time to compose two transitions is 7.4 nanoseconds, while the average time to create an OpenMP thread is 10.2 nanoseconds. By contrast, the overhead for creating a CUDA thread seems to be around 0.4 nanoseconds. While a different parallelization strategy may exist for multicore architectures, at present, our finding is that GPUs, or other architectures with a low cost to create and destroy threads, are much more suitable for the fine grained operations used for the composition task. Table For future work, other potential bottlenecks could be addressed. The largest bottleneck is the queue used on the host to keep track of the edges to expand on the GPU. Using a similar data structure on the GPU to keep track of the states to expand would yield higher speedups. The only challenge of using such a data structure is the memory consumption on the GPU. If the two input transducers contain a large number of states and transitions, the amount of memory needed to track all the states and edges generated will grow significantly. Previous work For the reduction step, speedups can be achieved if the sort and reduce operations can be merged with the edge expansion part of the method. The challenge of merging identical edges during expansion is the auxiliary memory that will be required to store and index intermediate probabilities. It can be doable if the transducers used for the composition are small. In that case, the reduce operation might not yield significant speedups given the fact that the overhead to compose small transducers is too high when using a GPU architecture. This is the first work, to our knowledge, to deliver a parallel GPU implementation of the FST composition algorithm. 
We were able to obtain speedups of up to 4.5× over a serial OpenFST baseline and 6× over the serial implementation of our method. This parallel method considers several factors, such as host to device communication using page-locked memory, storage formats on the device, thread configuration, duplicate edge detection, and duplicate edge reduction. Our implementation is available as open-source software. | 660 | 650 | 660 |
Modeling Infant Word Segmentation | While many computational models have been created to explore how children might learn to segment words, the focus has largely been on achieving higher levels of performance and exploring cues suggested by artificial learning experiments. We propose a broader focus that includes designing models that display properties of infants' performance as they begin to segment words. We develop an efficient bootstrapping online learner with this focus in mind, and evaluate it on child-directed speech. In addition to attaining a high level of performance, this model predicts the error patterns seen in infants learning to segment words. | The last fifteen years have seen an increased interest in the problem of how infants learn to segment a continuous stream of speech into words. Much of this work has been inspired by experiments with infants focusing on what capabilities infants have and which cues they attend to. While experimental work provides insight into the types of cues infants may be using, computational modeling of the task provides a unique opportunity to test proposed cues on representative data and validate potential approaches to using them. While there are many potential approaches to the problem, a desirable solution to the problem should demonstrate acceptable performance in a simulation of the task, rely on cues in the input that an infant learner is able to detect at the relevant age, and exhibit learning patterns similar to those of in-fant learners. Most work in computational modeling of language acquisition has primarily focused on achieving acceptable performance using a single cue, transitional probabilities, but little effort has been made in that work to try to connect these learning solutions to the actual learning patterns observed in children outside of performance on short artificial language learning experiments. In this work we present a simple, easily extended algorithm for unsupervised word segmentation that, in addition to achieving a high level of performance in the task, correlates with the developmental patterns observed in infants. We discuss the connections between the design and behavior of our algorithm and the cognitive capabilities of infants at the age at which they appear to begin segmenting words. We also discuss how our technique can easily be extended to accept additional cues to word segmentation beyond those implemented in our learner. | As this paper examines the intersection of infants' capabilities and computational modeling, we discuss work in both domains, beginning with experimental approaches to understanding how infants may perform the task of word segmentation. A potential account of how infants learn to identify words in fluent speech is that they learn words in isolation and then use those words to segment longer utterances A more plausible alternative account to assume children attend to patterns in the input, using them to identify likely word units. Much experimental work has followed from the finding that in artificial learning tasks, infants and adults appear to prefer wordlike units that match statistical patterns in the input More recent work using real language data has not shown transitional probabilities to be as useful a cue as originally suggested. 
While experimental work has posited simple algorithms that infants might use to accomplish the task of word segmentation, when applied to real language data these techniques have yielded very poor results Optimization-based strategies have focused on techniques that a learner might use to arrive at an optimal segmentation, either through a dynamic programming approach In contrast, bootstrapping approaches While bootstrapping approaches have generally made stronger attempts to align with infants abilities to process the speech signal We will now discuss the patterns of development for children learning to segment English words, which form the motivation for the design of our segmenter. While the developmental patterns of Englishlearning infants have been broadly studied, it has been difficult to identify errors that must be caused by failures to correctly segment words and not other cognitive limitations, issues of morphological productivity, or syntactic competency issues. In addition to the undersegmentations that Brown identifies, Infants appear to use the ends of utterances to aid segmentation, and as early at 7.5 months old they are able to recognize novel words in fluent speech if the novel words are presented at the ends of an utterance and not utterance medially Most crucially, the syllable seems to be the unit children use to form words. Experiments that have been performed to gauge adult and infant competency in word segmentation have been designed with the assumption that the only possible segmentation points are at syllable boundaries. That infants should be able to operate on syllables is unsurprising; infants as young as 4-days-old are able to discriminate words based on syllable length From this survey, we see some relevant phenomena that a good model of infant word segmentation should replicate. ( The algorithm we propose is similar in style to previous online bootstrapping segmenters The learner we propose will primarily use items in its lexicon to help identify new possible words. The structure of the lexicon is as follows: Lexicon. The lexicon contains the phonological material of each word that the learner has previously hypothesized. The lexicon stores a score along with each word, which the segmenter may increment or decrement. The score assigned to each entry in the lexicon represents the relative confidence that it is a true word of the language. Each increment simply adds to the score of an individual word and each decrement subtracts from it. Subtractive segmentation is the process of using known words to segment the speech signal, which infants appear to be able to do as young as at six months of age Subtractive Segmentation. When possible, remove a known word in the lexicon from the front of the utterance being segmented. One way to apply subtractive segmentation is a greedy score-based heuristic for subtractive segmentation Figure Initially, utterances are treated as words in isolation. When the lexicon is empty, no word boundaries will be inserted and the full contents of each utterance will be added to the lexicon as a word. High-frequency words are preferred. When presented with a choice of multiple words to subtract, the highest scored word will be subtracted, which will prefer higher frequency words over lower frequency words in segmentation. Syllables between words are not necessarily considered words. Syllables that occur between subtractions are not added as words in the lexicon. 
For example, if play and please are in the lexicon but checkers is not, the utterance play checkers please will be correctly segmented, but checkers will not be added to the lexicon. Much like infants appear to do, the learner does not place as much weight on less reliable boundaries hypothesized in the middle of an utterance A particularly useful constraint for defining a word, introduced to the problem of word segmentation by Unique Stress Constraint (USC): A word can bear at most one primary stress. Before taking advantage of word-level stress information, the infant learner would need to identify the acoustic correlates to word-level stress in her language, and we will not address the specific mechanisms that an infant learner may use to accomplish the task of identifying word-level stress in this paper. Based on strong experimental evidence that infants discriminate between weakly and strongly stressed syllables and use it to group syllables into word-like units We adopt the USC for segmentation in the following fashion: Unique Stress Segmentation (USS). Insert word boundaries such that no word contains two strong stresses. Do so in a lazy fashion, inserting boundaries as a last resort just before adding another syllable to the current would cause it to contain two strong stresses. u ← the syllables of the utterance, initially with no word boundaries i ← 0 while i < len(u) do if u starts with one or more words in the lexicon then Choose the highest scoring word w and remove it from the front of u by inserting a word boundary before and after it. Increment the score of w Advance i to the last word boundary inserted else Advance i by one syllable end if end while Add the syllables between the last boundary inserted (or the beginning of the utterance if no boundaries were inserted) and the end of the utterance as a word in the lexicon with a score of 1 This strategy is expressed in an algorithmic form in Figure A USS-based algorithm would note the stress on the first syllable, then keep scanning until another stress is located on the fourth syllable, inserting a break between the two. Givemethe and ball would be added to the lexicon. While this is not a perfect segmentation, it can be used to aid subtractive segmentation by seeding the lexicon, even if not all entries added to the lexicon are not correct. Given our bootstrapping methodology, it is highly desirable to be able to integrate USS along with subtractive segmentation. An algorithm that combines both is shown in Figure The greedy segmentation proposed is limited in its ability to find a good segmentation by its reliance on local decisions. A frequent undersegmentation error of the greedy segmenter is of this type: partof an apple. Because partof has a higher score than part at the point in learning where this utterance is encountered, the greedy segmenter will always choose partof. An alternative approach is to let the segmenter explore multiple hypotheses at once, using a simple beam search. New hypotheses are added to support multiple possible subtractive segmentations. For example, using the utterance above, at the beginning of segmentation either part or partof could be subtracted from the utterance, and both possible segmentations can be evaluated. The learner scores these hypotheses in a fashion similar to the greedy segmentation, but using a function based on the score of all words used in the utterance. 
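To make the greedy subtractive segmenter concrete, a minimal sketch is given below. It omits the stress-based (USS) boundaries and the probabilistic lexicon recall used in the full system; the tuple-of-syllables lexicon keys and the boundary bookkeeping are implementation choices of the sketch rather than details of the original algorithm.

```python
def segment(syllables, lexicon):
    """Greedy subtractive segmentation of one syllabified utterance.
    lexicon maps tuples of syllables to scores and is updated in place.
    Returns the positions (in syllables) of the inserted word boundaries."""
    boundaries, i, last, n = [], 0, 0, len(syllables)
    while i < n:
        matches = [w for w in lexicon if tuple(syllables[i:i + len(w)]) == w]
        if matches:
            w = max(matches, key=lambda m: lexicon[m])      # highest score wins
            lexicon[w] += 1
            for b in (i, i + len(w)):                        # boundaries around w
                if 0 < b < n and b not in boundaries:
                    boundaries.append(b)
            i = last = i + len(w)
        else:
            i += 1                                           # advance one syllable
    if last < n:                                             # trailing syllables become
        leftover = tuple(syllables[last:])                   # a new lexicon entry
        lexicon[leftover] = lexicon.get(leftover, 0) + 1
    return boundaries

lex = {}
for utt in (["play"], ["please"], ["play", "check", "ers", "please"]):
    print(segment(utt, lex))   # third utterance: boundaries [1, 3];
                               # "check ers" is segmented out but never enters lex
```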
The geometric mean has been used in compound splitting For any w not found in the lexicon we must assign a score; we assign it a score of one as that would be its value assuming it had just been added to the lexicon, an approach similar to Laplace smoothing. Returning to the previous example, while the score of partof is greater than that of part, the score of of is much higher than either, so if both partof an apple and part of an apple are considered, the high score of of causes the latter to be chosen. When beam search is employed, only words used in the winning hypothesis are rewarded, similar to the greedy case where there are no other hypotheses. In addition to preferring segmentations that use words of higher score, it is useful to reduce the score of words that led to the consideration of a losing hypothesis. In the previous example we may want to penalize partof so that we are less likely to choose a future segmentation that includes it. Setting the beam size to be two, forcing each hypothesis to develop greedily after an ambiguous subtraction causes two hypotheses to form, we are guaranteed a unique word to penalize. In the previous example partof causes the split between the two hypotheses in the beam, and thus the learner penalizes it to discourage using it in the future. To evaluate the performance of our model, we measured performance on child-directed speech, using the same corpus used in a number of previous studies that used syllabified input The corpus was syllabified using onset maximization. Any utterance in which a word could not be transcribed using CMUDICT was excluded, leaving 55,840 utterances. We applied a probabilistic recall function to the lexicon to simulate the fact that a child learner will not perfectly recall all hypothesized words either due to memory limitations, variability in the input, or any other possible source of failure. We used the same function and constant as used by To adjust the word-level stress information to better reflect natural speech, the stress information obtained from CMUdict was post-processed in the context of each utterance using the technique of Lignos and In addition to variations of our algorithm, we evaluated a baseline segmenter which marks every syllable boundary as a word boundary, treating each syllable as a word. We tested five variants of our algorithm, adding combinations of USS, subtractive segmentation, and adding beam search with a beam size of two Precision and recall metrics were calculated over all word boundaries over all utterances in the corpus. The segmenter's task is effectively to classify each syllable boundary as a word boundary or not. As single-syllable utterances are unambiguously a single word with no possible boundaries, they are excluded from evaluation but still given as input. Evaluation was performed by giving each algorithm a single pass over the data set, with the performance on every utterance included in the total score. This is the most challenging metric for an online segmenter, as early mistakes made when the learner has been exposed to no data still count against it. The performance of several variations of our algorithm is given in Table Subtractive Segmentation provides an improvement in utterance evaluation over the Syllable Baseline, and adding beam search to it slightly improves F-score, sacrificing precision for recall. 
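The hypothesis-scoring function behind these beam-search numbers, the geometric mean of the lexicon scores of all words in a candidate segmentation (with unseen words smoothed to a score of one), can be sketched as follows; the toy lexicon scores are invented purely for illustration.

```python
# Sketch of the beam-search hypothesis score: the geometric mean of the
# lexicon scores of the words in a candidate segmentation, with unseen words
# assigned a score of 1 (a Laplace-style fallback).

import math
from typing import Dict, List, Tuple

Word = Tuple[str, ...]


def hypothesis_score(words: List[Word], lexicon: Dict[Word, int]) -> float:
    # Clamp at 1 so decremented (or unseen) entries never break the log.
    scores = [max(lexicon.get(w, 1), 1) for w in words]
    return math.exp(sum(math.log(s) for s in scores) / len(scores))


# Illustrative scores only: 'of' is far more frequent than 'partof', so the
# hypothesis that splits 'part of' wins despite 'partof' outscoring 'part'.
lexicon = {("part",): 40, ("part", "of"): 60, ("of",): 500, ("an",): 300,
           ("ap", "ple"): 20}
h1 = [("part", "of"), ("an",), ("ap", "ple")]          # "partof an apple"
h2 = [("part",), ("of",), ("an",), ("ap", "ple")]      # "part of an apple"
print(hypothesis_score(h1, lexicon) < hypothesis_score(h2, lexicon))  # True
```

When the two hypotheses are compared, the words of the winning segmentation are incremented and the word whose alternative subtraction created the losing hypothesis (partof in this example) is decremented, as described above; using a geometric mean keeps hypotheses with different numbers of words comparable.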
This is to be expected from the penalization step in beam search; as the penalization penalizes some good words in addition to undesirable ones, the purification of the utterance segmentation and the lexicon comes at the cost of recall from over-penalization. While USS alone is clearly not a sufficiently rich segmentation technique, it is important to note that it is a high precision indicator of word boundaries, suggesting that stress information can be useful to the learner even when used in this simple way. More importantly, USS contributes unique information to subtractive segmentation, as the utterance F-score of subtractive segmentation improves from 90.37 to 92.88. While the performance numbers show that the segmenter performs competently at the task, the more significant question at hand is whether the errors committed by the learner match developmental patterns of infants. As the design of the segmenter predicts, the main error types of the Subtractive Seg-mentation + USS algorithm fall into two classes: Function word collocations. For example, the third highest-scored non-word in the lexicon is that'sa, congruent with observations of function word collocations seen in children Oversegmentation of function words. The greedy approach used for segmenting the words of highest score results in function words being aggressively segmented off the front of words, for example a nother. The highest scored non-word in the lexicon is nother as a result. Adding beam search reduces the number of function word collocations in the segmenter's output; the learner's most commonly penalized lexicon entry is isthat. However, beam search also penalizes a lot of words, such as another. Thus the strategy used in beam search predicts an early use of function word collocations, followed by later oversegmentation. In the discussion of related work, we identified two major paradigms in modeling word segmentation: optimization and bootstrapping approaches. The algorithm presented here combines elements of both. Its behavior over time and across utterances is that of a bootstrapping learner, but when processing each utterance it selects a segmentation based on a simple, cognitively plausible beam search. By using a beam search of the kind suggested, it is easy to see how a variety of other cues could be integrated into the learning process. We have given a simple function for selecting the best hypothesis that only relies on lexicon scores, but more sophisticated functions could take multiple cues into account. For example it has been observed that 7-month-olds attend more to distributional cues while 9-month-olds attend more to stress cues A crucial frontier in word segmentation is the expansion of evaluation to include other languages. As with many other tasks, creating solutions that perform well in a broad variety of languages is important but has not yet been pursued. Future work should attempt to match developmental patterns in other languages, which will require adding morphological complexity to the system; the techniques developed for English are unlikely to succeed unchanged in other languages. Comparing with other algorithms' published results is difficult because of varying choices of data sets and metrics. For example, other syllable-based algorithms have evaluated their performance using word-level, as opposed to boundary-level, precision and recall The work presented here represents a step toward bringing together developmental knowledge regarding word segmentation and computational modeling. 
Rather than focusing on cues in artificial learning experiments which may or may not generalize to the natural development of word segmentation in children, we have shown how a simple algorithm for segmentation mimics many of the patterns seen in infants' developing competence. We believe this work opens the door to a promising line of research that will make a stronger effort to see simulations of language acquisition as not just an unsupervised learning task but rather a modeling task that must take into account a broad variety of phenomena. | 631 | 1,781 | 631 |
SPT: Learning to Selectively Insert Prompts for Better Prompt Tuning | Prompt tuning prepends a soft prompt to the input embeddings or hidden states and only optimizes the prompt to adapt pretrained models (PTMs) to downstream tasks. The previous work manually selects prompt layers which are far from optimal and failed to exploit the potential of prompt tuning. In this work, we propose a novel framework, Selective Prompt Tuning (SPT), that learns to select the proper prompt layers by inserting a prompt controlled by a learnable probabilistic gate at each intermediate layer. We further propose a novel bi-level optimization framework, SPT-DARTS, that can better optimize the learnable gates and improve the final prompt tuning performances of the learned prompt layer settings. We conduct extensive experiments with ten benchmark datasets under the full-data and few-shot scenarios. The results demonstrate that our SPT framework can perform better than the previous state-of-the-art PETuning baselines with comparable or fewer tunable parameters. | Increasingly large pre-trained models (PTMs) Prompt tuning IDPG In this paper, we first conduct a pilot experiment to show that simple modifications to the prompt inserting strategies in Our SPT framework considers a simple search space of whether to insert the generated instanceaware prompts into an intermediate layer of the PTM. As depicted in Figure Extensive experiments are conducted on six benchmark datasets from the GLUE benchmark and four widely studied text classification benchmarks. The results show that SPT performs comparable to or outperforms the previous SOTA PETuning methods. Especially in the few-shot scenario with 100 training samples, SPT outperforms the PETuning baselines by a clear margin. Figure To summarize, our contributions are: • We propose the SPT framework, which automatically learns to insert instance-aware prompts at the proper intermediate layers of PTMs. • We propose SPT-DARTS, which contains two novel techniques to improve the optimization process of the prompt hyper-network. • We verify our SPT framework in the full-data and few-shot scenarios across ten benchmark text classification tasks and three different PTM backbones. 2 Related work | A major research line of PETuning is the promptbased tuning that inserts some additional soft prompts into the embeddings or hidden states on specific layers of PTMs. Prompt tuning One important research line of PETuning is the adapter-based tuning Recently, there are work conducting automatic configurations of PETuning modules, such as For PTM full fine-tuning, the input samples are usually reformulated as [CLS] ⟨S 1 ⟩ [SEP] if the inputs are single sentences, and as if the inputs are sentence pairs. After the PTM backbone encodes the inputs, the final hidden states of the [CLS] token will be used to predict classification labels with a linear classification head. In the settings of prompt tuning, the downstream tasks are reformulated as masked language model tasks to close the gap between pre-training and finetuning. Specifically, we insert randomly initialized soft prompt p on the word embeddings, and also modify the original inputs using different manually designed templates with a [MASK] token for task adaptations. For example, in single-sentence tasks, the input will be transformed into a template like where E(x) means to map the tokens in the input sequence x into embedding vectors. Then, we map the original labels Y to some words (label words) in the vocabulary V of M. 
Then the final hidden states of [MASK] token will be fed into the pretrained masked language modeling (MLM) head to predict label words. During downstream task tuning, the PTM backbone and the MLM head will be frozen, and only the soft prompt p will be tuned. This way, the downstream tasks are formulated as a masked language modeling task to close the gap between pre-training and downstream task tuning. In the setting of our proposed SPT framework (depicted in Figure In this section, we will elaborate on our Selective Prompt Tuning (SPT) framework, which is depicted in Figure We have conducted a pilot experiment on the RTE A prompt generator is a simple feed-forward layer with a bottleneck architecture Following We aim to search for the optimal setting of prompt layers under the limited tunable parameter budgets. Assume the parameter budget allows K prompt layers. Since not all prompt layers contribute equally to task performance, only a fraction of layers should be selected as prompt layers to avoid redundancy of the tunable parameters. Thus, we initialize a prompt hyper-network where the embedding layer and all the intermediate layers have a prompt generation layer controlled by a learnable probabilistic gate. Introducing a zero-initialized learnable parameter α i ∈ R, the learnable gate at layer i is given by where Sigmoid() is the Sigmoid activation function. a i ∈ (0, 1) can be seen as the probability of activating the prompt generator at layer i. At each layer of the hyper-network, the prompt p i consists of the prompt p (prev) i propagated from the previous layer, and the prompt p (new) i generated from the prompt generator PG i at layer i. Formally, the prompt p i at layer i is given by where τ ∈ {0.5, 1.0} is a hyper-parameter determining whether to discard the previous layer's prompt p (prev) i when a new prompt is generated at layer i. Note that τ = 1.0 is similar to Through optimization, the probabilistic gate a i 's value will move toward 0 or 1, acting as importance scores for the prompt layers. The top K layers that receive the highest probabilistic gate values will be set as prompt layers to meet the parameter budget, and the model with such a group of prompt layers will be referred to as the learned SPT model. Our hyper-network, which is the backbone model with soft prompts at each layer and the prompts are controlled by the learnable gates α i . The parameters α i are learnt jointly with the model parameters. So they are not hyper-parameters and do not require hyper-parameter tuning to determine their values. Following DARTS where L() is the objective function on a given downstream task. The above bi-level optimization problem is approximated with an alternating optimization strategy. The gradients of the prompt generators are calculated with batches of samples from D ω , and the gradients of α are calculated on D α . Although DARTS is widely applied, it is known to produce unstable gradients and sub-optimal performances where GD() detaches the parameter from the computational graph, and the parameter will never have gradients. The above equation does not change the value of a i since C has a value of 1. And equation 3 becomes Now the gradient of α i is given by: We can see that our re-parameterization technique introduces an extra term k a k j a j ∂L ∂â k in the gradient. This way, we explicitly introduce the interactions among the gating parameters from different layers during gradient computations. 
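A compact sketch of the gated prompt layer may help make the above concrete. Because the mixing equation and the exact form of the re-normalization constant C are not fully legible above, the forms used here (a gate-controlled mix governed by τ, and C = detach(Σ_j a_j)/Σ_j a_j, which equals 1 in value) are assumptions consistent with the description; the pooling used by the prompt generator is likewise an assumption.

```python
# Sketch of a gated, instance-aware prompt layer for the prompt hyper-network.
# The mixing rule and the re-normalization constant are assumptions; see the
# accompanying text.

import torch
import torch.nn as nn


def renormalized_gates(alpha: torch.Tensor) -> torch.Tensor:
    """Gates a_i = sigmoid(alpha_i), multiplied by C = detach(sum_j a_j) /
    sum_j a_j.  C is 1 in value, so the gates are unchanged, but it couples
    the gradients of the gates across layers."""
    a = torch.sigmoid(alpha)
    c = a.sum().detach() / a.sum()
    return a * c


class PromptGenerator(nn.Module):
    """Bottleneck feed-forward generator producing `prompt_len` soft prompt
    vectors from a pooled representation of the layer input (the pooling
    choice is an assumption of this sketch)."""
    def __init__(self, hidden: int, bottleneck: int, prompt_len: int):
        super().__init__()
        self.prompt_len = prompt_len
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, prompt_len * hidden)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:  # (B, T, H)
        pooled = hidden_states.mean(dim=1)                            # (B, H)
        out = self.up(torch.relu(self.down(pooled)))                  # (B, L*H)
        return out.view(hidden_states.size(0), self.prompt_len, -1)   # (B, L, H)


def mix_prompts(prev_prompt: torch.Tensor, new_prompt: torch.Tensor,
                a_i: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    """Assumed mixing rule: the gate scales the newly generated prompt, and
    tau controls how much of the propagated prompt is discarded when a new
    prompt is generated (tau = 1.0 replaces it, tau = 0.5 keeps part of it)."""
    return a_i * new_prompt + (1.0 - tau * a_i) * prev_prompt
```

After training, the K layers with the highest gate values would be kept as prompt layers and the rest pruned, as described above.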
Architectural consistency learning Note that the final optimized model we want is sparse, with most layers' prompt generators being pruned. To close the gap between the hyper-network and the final model, we assign a Bernoulli distributed random mask m i ∈ {0, 1} with mean value s ∈ (0, 1) to each learnable probabilistic gate a i . Thus, equation 6 becomes Now we ask the same input x to go through the forward pass twice, once with the architectural masks applied (Equation (1) x and h (2) x for the input sample. We now introduce a consistency regularization objective in addition to the task's objective function: where MSE is the mean squared error loss function. Note that this regularization term will be added to both the inner and outer objectives in Equation We evaluate our method on five single-sentence and five sentence-pair classification tasks, including six tasks from GLUE benchmark All experiments are conducted on NVIDIA GTX A40 GPUs. We use Pytorch We compare our SPT framework with the current SOTA baseline methods. Fine-tune The traditional fine-tuning method that trains all parameters in the PTM backbone. Adapter-based tuning we compare with (1) Adapter We first evaluate our SPT framework in the fewshot scenario. Following The results for the few-shot scenario of 100 samples are presented in Table From Table The overall comparison of our SPT framework and the baselines in the full-data scenario is reported in Table Visualization and discussions of the learned SPT models We visualize the learned SPT models on the ten tasks with RoBERTa-large backbone via heat map, as depicted in Figure The effects of prompt length In the main experiments (Table Transferability of the learned PG settings We now evaluate the transferability of the learned SPT models. In table 3, we select four datasets, SST-2, MNLI, RTE, and Subj, and treat them as the source or target dataset. We search the prompt layer setting on the source dataset and train with the learned prompt layer on the target task. We can see from Table 3 that the transferred prompt layer settings can perform close to the directly learned settings and already achieve better performances than most of the baseline models. The transferability guarantees the re-usability of our SPT framework. Working with other pre-trained encoders To demosntrate that our method's superiority does not rely on a specific pre-trained backbone, we run our SPT method and baselines on the DeBERTa-large Training efficiency of the SPT framework Compared with LPT Inference efficiency We run inference on the RTE test set, with three different tasks: prompt tuning, LPT, and the learned SPT model, with batch size 32 and maximum length 128. The memory footprint and speed are recorded in Table To demonstrate that our method can generalize to larger language models, we now conduct the The results are presented in Table Other English tasks We now also conduct experiments on other English tasks of different types: (a) COPA, a task focused on commonsense reasoning. (b) ShaRE-13, a nested named entity recognition (NER) task within the biomedical domain. (c) MultiArith, a task centered around arithmetic reasoning. To be consistent with Table In order to validate the effectiveness of our SPT-DARTS method, we now conduct two experiments. 
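Before turning to those experiments, the consistency regularization just described can be sketched as follows. Whether the second forward pass drops the Bernoulli masks entirely or re-samples them is not recoverable from the text above; this sketch compares a masked pass against an unmasked one, and `encode_fn` is a hypothetical stand-in for the prompt hyper-network's forward pass.

```python
# Sketch of the architectural consistency regularization: Bernoulli masks are
# applied to the layer gates, the same input is encoded a second time, and an
# MSE penalty ties the two resulting representations together.

import torch
import torch.nn.functional as F


def masked_gates(alpha: torch.Tensor, keep_prob: float) -> torch.Tensor:
    """Gate values sigmoid(alpha_i) multiplied by Bernoulli masks with mean
    `keep_prob`, mimicking the sparsity of the final pruned model."""
    a = torch.sigmoid(alpha)
    mask = torch.bernoulli(torch.full_like(a, keep_prob))
    return a * mask


def consistency_loss(encode_fn, batch, alpha, keep_prob=0.5, weight=1.0):
    """`encode_fn(batch, gates)` is assumed to return the hidden state used
    for prediction (e.g. at the [MASK] position)."""
    h_masked = encode_fn(batch, masked_gates(alpha, keep_prob))
    h_full = encode_fn(batch, torch.sigmoid(alpha))   # pass without masks
    return weight * F.mse_loss(h_masked, h_full)
```

As noted above, this term is added to both the inner and outer objectives of the bi-level optimization.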
Ablation on the hyper-network optimization method The first experiment is to substitute SPT-DARTS to DARTS SPT-DARTS on the NAS benchmark Note that the architectural consistency learning regularization of our SPT-DARTS method is generally applicable to neural architecture search. We now evaluate SPT-DARTS on the widely used NAS benchmark, NAS-benchmark-201 In this work, we propose the SPT framework to automatically determine the optimal settings for prompt layers under the given PTM backbone and downstream task. We initialize a prompt hypernetwork in which each layer has a prompt generator controlled by a learnable probabilistic gate. To better optimize the prompt hyper-network, we propose a novel SPT-DARTS method containing two novel modifications to the original DARTS' bi-level optimization process. Experiment results in full-data and few-shot scenarios demonstrate that SPT can achieve comparable or better performance than state-of-the-art PETuning methods while maintaining parameter and inference efficiency. Prompt tuning Now we conduct pilot experiments on the RTE • Model M 0 : following LPT • Model M 2,k : starting from layer 1, we set a prompt layer for every k layers (k = 1, 2, 3, 4). The bottleneck dimension r of the prompt generators are set to 6, 12, 18, and 24, respectively. Note that in the above models, if a prompt layer has a prompt propagated from the previous layers, we will average the newly generated prompt with the old prompt and insert the resulting prompt into the prompt layer. We can see that the above models are simple variants of LPT Dataset splits For SST-2, MNLI, MRPC, QNLI, QQP Settings for the baselines We also add manual templates in Tables 9 and 10 to transfer the downstream tasks to (masked) language modeling tasks. For adapter-based tuning methods, we set the down-projection size m to 16. We set the soft prompt length to 20 for prompt tuning In the main content, we present the results for the few-shot scenario of 100 samples. Here, we present the results for the full-data scenario. The results are presented in Table In the main content, we present the results for the few-shot scenario of 100 samples. Here, we present the results for the few-shot scenario of 200 samples and 500 samples. Under a given random seed, we randomly sample the training samples from the original training set. We will run the experiments over 10 different random seeds and report the mean and deviation on each task. The pretrained backbone model is RoBERTa-large. The results are presented in Table In this section, we measure the memory consumption and inference speed for three methods: prompt tuning, LPT and SPT. The pre-trained backbone is RoBERTa-large, the batch size is set to 32 and the maximum sequence length is 128. We report the measures in Table We now present the experimental results for changing the prompt length in Table NAS-Bench-201 | 982 | 1,188 | 982 |
ADVISER: A Dialog System Framework for Education & Research | In this paper, we present ADVISER 1 -an open source dialog system framework for education and research purposes. This system supports multi-domain task-oriented conversations in two languages. It additionally provides a flexible architecture in which modules can be arbitrarily combined or exchanged -allowing for easy switching between rules-based and neural network based implementations. Furthermore, ADVISER offers a transparent, user-friendly framework designed for interdisciplinary collaboration: from a flexible back end, allowing easy integration of new features, to an intuitive graphical user interface supporting nontechnical users. | Dialog systems can be open-ended, e.g. small talk systems During the last years, several toolkits To address these shortcomings, we propose a multilingual multi-domain dialog system with two parallel goals: 1) to provide a highly flexible research framework not only for technique oriented developers but also for non-technical oriented developers such as linguists and 2) to provide an interdisciplinary educational tool. | During the last decade, several toolkits have been developed to facilitate the rapid implementation of goal-oriented dialog systems. RavenClaw Our approach is inline with InproTK OpenDial PyDial Our goal with this system is to provide both a highly modular research platform and an interdisciplinary educational tool. Use Cases To accomplish this, we address the needs of the following three user groups: 1) technical users such as machine learning researchers 2) non-technical researchers such as linguists and 3) multidisciplinary students. The main objectives of the framework are threefold: 1) to maximise the ease for technical developers when exploring new architectures or extending system functionality with new techniques, e.g. machine learning. 2) To minimise the workload on code bases for nontechnical users (e.g. linguists) allowing them to focus on their main goals, e.g. exploring the dialog flow for new domains, or languages or investigating human language variations when interacting with a dialog system. 3) To provide an engaging way for multidisciplinary students to learn how dialog systems work. From this, our framework is designed to optimise the following criteria: Modularity: For each module in a classic dialog system pipeline (NLU, BST, dialog policy and NLG), we provide a handcrafted baseline module, additionally we provide a machine learning based implementation for the BST and policy (see section 4.2). These can be used to quickly assemble a working dialog system or as implementation guidelines for custom modules. Additionally, because all modules inherit from the same abstract class, technical users can also easily write their own implementations or combinations of modules. Flexibility: In contrast to a more static dialog system pipeline, we propose a graph structure where the user is in control of the modules and their order. This level of control allows users to realise anything from pipelines to end-to-end systems. Even branching scenarios are possible as demonstrated by our meta policy which combines multiple parallel subgraphs into a single dialog. Transparency: Inputs to and outputs from each module are captured by automatically generated XML interface descriptions, providing a transparent view of data flow through the dialog system. 
User-friendly at different levels: technical users have the full flexibility to explore and extend the back end; non-technical users can use our defined modules for building systems; students from different disciplines could easily learn the concepts and explore human machine interaction. Graphical user interfaces (GUIs) allow users to access a system in an easy, clear and appealing fashion. Thus, in addition to a console, ADVISER provides two separate graphical interfaces: a GUI to chat with the dialog system and a gamelike interface for study purposes. Chat interface Our GUI is implemented as a module, which is called by the dialog system once at the beginning and once at the end of each turn. In the first turn, the GUI is initialised and loaded. At the beginning of each turn, it blocks the processing pipeline until the user has entered a message. The message is displayed inside the GUI and then passed to succeeding modules, e.g. the NLU module. At the end of the turn, the module takes the output of the NLG and displays it inside the GUI. Gamelike interface for study purposes Handcrafted NLU modules are often based on regular expressions (regexes), which aim to find patterns inside a user utterance in order to identify possible user acts. The module's developers try to cover as many user act realisations (UARs) as possible. However, due to the versatile nature of human language, many regexes are needed to yield a high coverage. In ADVISER, we provide an interface which supports collaboration between computer scientists and linguists to yield a higher quality of the NLU module. To motivate both sides, we frame this challenge as a game -the CrossTick game -in which computer scientists try to achieve high regex coverage and linguists try to write uncovered UARs. First, the user has to select the domain for which UARs are written and the NLU module that should be evaluated. After a UAR is created, it is analysed by the specified module and the user is informed via a tick ( ) or a cross (×) whether the user acts were detected correctly. The user can save and load files in JSON format. Input Up to now, only text is supported but our tool could be easily extendable to other modalities such as speech and vision. Currently text can either be entered through the console or our GUI. NLU We implemented a domain-independent rules-based NLU that loads regexes from a JSON file. The regexes are split into three categoriesgeneral acts (e.g. Hello, RequestAlternatives and Affirm), domain-specific inform acts and domainspecific request acts. We supported both, English and German rules. The NLU module receives the user input as string and checks it across all regexes, creating a list of possible user acts. If no act is found, then it is assumed that the NLU was not capable of understanding and the user act is interpreted as a BadAct. We additionally resolve some ambiguities using the belief state, i.e. the dialog history. If a non-contextualised Affirm or Deny act is found, the system attempts to use the dialog history to contextualise it. BST The belief state tracker maintains a representation of the current dialog state. The rulesbased BST receives a list of user acts from the NLU that are decoded and stored with probabilities in the belief state. The BST also detects the presence of discourse acts, e.g. Hello, Repeat, Inform and Request. Moreover, it stores information from the system history including the last requested slot and last entity offered. 
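Stepping back to the rules-based NLU described above, a minimal sketch of loading act regexes from JSON and matching them against an utterance might look as follows; the JSON layout, class name, and act naming are assumptions of this illustration rather than ADVISER's actual format.

```python
# Sketch of a regex-based NLU in the spirit of the one described above: act
# patterns are loaded from a JSON file and every pattern is checked against
# the user utterance.

import json
import re
from typing import Dict, List


class RegexNLU:
    def __init__(self, regex_file: str):
        with open(regex_file, encoding="utf-8") as f:
            spec: Dict[str, Dict[str, str]] = json.load(f)
        # Assumed layout, e.g.:
        # {"general": {"Hello": "\\b(hi|hello)\\b"},
        #  "inform":  {"ects": "(\\d+)\\s*ects"},
        #  "request": {"lecturer": "who (teaches|holds)"}}
        self.patterns = {
            category: {act: re.compile(rx, re.IGNORECASE)
                       for act, rx in acts.items()}
            for category, acts in spec.items()
        }

    def extract_user_acts(self, utterance: str) -> List[str]:
        acts = []
        for category, compiled in self.patterns.items():
            for act, pattern in compiled.items():
                if pattern.search(utterance):
                    acts.append(f"{category}:{act}")
        # If nothing matched, the input is treated as a BadAct downstream.
        return acts or ["BadAct"]
```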
Our machine learning based belief state tracker is trained to predict the belief state directly from text without the need for an NLU. To track the constraints and requests issued by the user, we feed system actions and user input turn-wise into a recurrent network and concatenate the resulting hidden states of both inputs before predicting the final belief state Policy The rules-based policy aims to provide users with a single entity matching the constraints they have specified. After each turn, the policy verifies that the user has not ended the dialog. It then reads the current belief state and generates a suitable query for the database. If there are multiple results, the next system act will request more information from the user to disambiguate. Otherwise, the system is able to make an offer -directly informing the user about a specific entity -or to give more details about a current offer. Our machine learning policy is trained using deep RL. Similar to the Deep Q-learning algorithm NLG In the natural language generation module, the semantic representation of the system act is transformed into natural language. In the handcrafted NLG module, each possible system act is mapped to exactly one utterance. To reduce the potentially large number of mappings, templates are used which allow multiple mappings from system acts to their respective utterance at once. By specifying placeholders for a system acts slots and/or values, the utterance can be formulated independent of the actual realisations (e.g. inform(name={X}, ects={Y}) → The course {X} is worth {Y} ECTS.). During the dialog, the system iterates through the templates and chooses the first one for which the system act fits the template's signature. For each domain we present here, we created both German and English templates. User Simulator To support automatic evaluation and enable RL, we implemented a user simulator to provide user actions at the intention level. For this purpose, we integrated the Agendabased Meta Policy In a multi-domain dialog system, intelligently switching between or combining individual domains is necessary to provide the user a unified experience. However, the best way to accomplish this remains unclear. In our system, we propose an architecture where all domains are allowed to run in parallel and the resulting output is processed by a Meta Policy. The meta policy is responsible for tracking which domains are active and, if necessary, combining their output. In the case where a user utterance cannot be directly handled in the context of a single domain, the meta policy is also responsible for rewriting it into one or more single-domain utterances. If this happens, rather than outputting something for that turn or asking for user input, the system steps through an additional turn, using the rewritten utterance as the new user input. In this way the meta policy is able to intelligently coordinate switching or combining domains, preserving as much information as possible to make as informed of a decision as possible. This architecture can be seen in figure In order to feedback on the quality, functionality, and usefulness of the ADVISER system, we conducted two experiments: we first investigated user experiences with a student support dialog system and second explored the effectiveness of using a game within a multidisciplinary practical course. 
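Before describing those experiments, the template-based NLG just outlined can be sketched as below. Representing a system act as an intent plus a slot-value dictionary, and a template as an (intent, required slots, text) triple, are assumptions of this sketch.

```python
# Sketch of template-based NLG: the first template whose signature (intent
# plus slot set) matches the system act is selected and its placeholders are
# filled with the act's values.

from typing import Dict, List, Tuple

Template = Tuple[str, frozenset, str]   # (intent, required slots, text)

TEMPLATES: List[Template] = [
    ("inform", frozenset({"name", "ects"}),
     "The course {name} is worth {ects} ECTS."),
    ("request", frozenset({"turn"}),
     "Which {turn} are you interested in?"),
]


def generate(intent: str, slots: Dict[str, str]) -> str:
    """Pick the first template whose signature matches the system act."""
    for tmpl_intent, required, text in TEMPLATES:
        if tmpl_intent == intent and required == frozenset(slots):
            return text.format(**slots)
    raise ValueError(f"no template for {intent}({sorted(slots)})")


print(generate("inform", {"name": "Dialog Systems", "ects": "6"}))
# -> The course Dialog Systems is worth 6 ECTS.
```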
Multilingual and multi-domain As a realworld use case, we implemented a dialog system, using our ADVISER framework, to help students navigate through the course and module selection at the Institute for Natural Language Processing (IMS) at the University of Stuttgart. This task consists of three domains for asking information about lecturers, for locating courses, and for collecting information about modules. Students can freely switch between or combine the domains in order to find the information they need. Additionally because of the students' backgrounds, our system supports NLU and NLG in both, English and German. We evaluated our dialog policies automatically on two domains: the new domain -IMS modules and the benchmark domain -Cambridge restaurants After each dialog, participants were asked to rate their chat with ADVISER, considering the quality of the respective dialog in terms of naturalness and coherence as well as how successful ADVISER was in processing the information provided by the user and how comfortable it was to use. Overall, participants rated the quality of dialog with the RL policy a 4.3 similar to the results obtained from dialogs using the rules-based policy (4.48 points on average) on a scale from 1 (very bad) to 7 (very good), confirming the functionality of the system. Considering the system's success in processing the user information correctly, on a scale from 1 (very bad) to 5 (very good), participants again found ADVISER to be slightly above average. This result applies to both dialogs which were generated using the rules-based policy (3.52 on average) and those using the RL policy (3.52 point average). 61.5 % of the participants reported that they would use ADVISER for their own pur-poses. Moreover, asking participants to rate the comfort of using the system on a scale from 1 (very bad) to 5 (very good), an average comfort level was 3.69, indicating that most users felt comfortable with the system. To test ADVISER as a tool for study purposes, we evaluated the CrossTick game, where linguistics and computer science students work to develop and gain a better understanding of the NLU in a dialog system. CrossTick game Given a user act, linguistics students/researchers work to find as many UARs as possible which are not recognised by the system. On the other side, computer science students/researchers minimise the system errors by developing either new rules or machine learning models to handle more natural language variations of the user inputs. Evaluation In order to test the efficiency and the user's opinion about the game, the same 13 participants from the previous evaluation were given 10 tasks. Per task, users were asked to take the linguist's perspective, and given intent, slots, and values to generate natural language for. In this case intent corresponds to the type of sentence (e.g. Inform/Request), slot to the type of information a user is giving/requesting and value describes the actual information given. If participants succeeded in creating natural language variations that were not covered by the system's NLU, they saw a cross mark (×) next to their input, and points were added to their score. Otherwise, they obtained a check mark ( ), worth 0 points. Although users could reach an infinite number of points per task, they were encouraged to be more productive and creative by telling them to beat the high score another user previously scored. During the survey, 84.6% of the participants stated that the game was effective for educational purposes. 
Further, on a scale from 1 (completely useless) to 5 (very useful) the CrossTick Game received on average 3.69 points, suggesting that most users learned the NLU module's functionality. Overall, participants enjoyed using the game. Some participants especially liked the challenge to beat the high score, while others enjoyed that they were rewarded with points for uncovered sentences. In this paper, we presented ADVISER -an open source dialog system which supports multilingual, multi-domain human-machine task-oriented conversations. It supports modules which can easily be interchanged between rules-based and machine learning implementations -including deep learning and RL. Our preliminary human study shows that with our toolkit one can easily build a useful dialog system. Furthermore, the CrossTick game offers an appealing interface for education purposes for different study disciplines. | 644 | 422 | 644 |
KaggleDBQA: Realistic Evaluation of Text-to-SQL Parsers | The goal of database question answering is to enable natural language querying of real-life relational databases in diverse application domains. Recently, large-scale datasets such as Spider and WikiSQL facilitated novel modeling techniques for text-to-SQL parsing, improving zero-shot generalization to unseen databases. In this work, we examine the challenges that still prevent these techniques from practical deployment. First, we present KaggleDBQA, a new cross-domain evaluation dataset of real Web databases, with domain-specific data types, original formatting, and unrestricted questions. Second, we re-examine the choice of evaluation tasks for text-to-SQL parsers as applied in real-life settings. Finally, we augment our in-domain evaluation task with database documentation, a naturally occurring source of implicit domain knowledge. We show that KaggleDBQA presents a challenge to state-ofthe-art zero-shot parsers but a more realistic evaluation setting and creative use of associated database documentation boosts their accuracy by over 13.2%, doubling their performance. | Text-to-SQL parsing is a form of database question answering (DBQA) that answers a user's natural-language (NL) question by converting it into a SQL query over a given relational database. It can facilitate NL-based interfaces for arbitrary enduser applications, thereby removing the need for domain-specific UX or learning query languages. As such, DBQA attracted significant attention in academia and industry, with development of supervised datasets The key challenge of text-to-SQL parsing is zeroshot generalization to unseen domains, i.e. to new database schemas and differently distributed NL questions. Large-scale annotated datasets like Spider Despite impressive progress in DBQA, deployment of SOTA parsers is still challenging. They often lack robustness necessary to deploy on reallife application domains. While many challenges underlie the gap between SOTA DBQA and its reallife deployment, we identify three specific discrepancies. First, Spider and WikiSQL datasets normalize and preprocess database schemas or rely on academic example databases that originate with humanreadable schemas Second, the NL questions of Spider and Wik-iSQL have high column mention percentage Finally, the standard evaluation setting of crossdomain text-to-SQL parsing assumes no in-domain Database: Student Math Score • To test question generalization, we collected unrestricted NL questions over the databases in KaggleDBQA. Importantly, the annotators were not presented with original column names, and given no task priming (Section 3.2). Out of 400 collected questions, one-third were out of scope for SOTA text-to-SQL parsers. The remaining 272 questions, while expressible, can only be solved to 13.56% accuracy (Section 4). • Finally, we augment KaggleDBQA with database documentation, common metadata for real-world databases and a rich source of implicit domain knowledge. Database documentation includes column and table descriptions, categorical value descriptions (known as data dictionaries), SQL examples, and more (Section 3.3). We present a technique to augment SOTA parsers with column and value descriptions, which significantly improves their out-of-domain accuracy (Section 4). 
Figure In addition to more realistic data and questions, we argue that evaluation of real-world text-to-SQL performance should assume few-shot access to ∼10 in-domain question-SQL examples rather than measuring zero-shot performance. In practical terms, few-shot evaluation assumes up to 1-2 hours of effort by a target database administrator or application developer, and translates to significant performance benefits. In a few-shot evaluation setting, augmenting a SOTA text-to-SQL parser (RAT-SQL by | Text-to-SQL Semantic Parsing Semantic parsing has been studied extensively for decades In this paper, we propose a few-shot evaluation to inspire future research of practical text-to-SQL parsers. Like zero-shot, fewshot has access to many out-of-domain examples, but it also has access to a small number of indomain examples as well. Few-shot learning has been applied to text classification in Recent work has begun to question whether existing datasets are constructed in a way that will lead to models that generalize well to new domains. Another direction of generalization being explored is compositionality. The goal of the KaggleDBQA evaluation dataset is to more closely reflect the data and questions a text-to-SQL parser might encounter in a real-world setting. As such, it expands upon contemporary cross-domain text-to-SQL datasets in three key aspects: (i) its databases are pulled from real-world data sources and not normalized; (ii) its questions are authored in environments that mimic natural question answering; (iii) its evaluation assumes the type of system augmentation and tuning that could be expected from domain experts that execute text-to-SQL parser deployment. We describe each of these components in turn in this section. We chose to obtain databases from Kaggle, a popular platform for hosting data science competitions and sharing datasets and code. Their hosted datasets are by definition "real" as they are used by members of the site for research. Competition hosts upload their data unnormalized, and the data content and formatting matches its domainspecific usage (see Figure For each database, we asked five annotators to write ten domain-specific questions that they think someone might be interested in and that can be answered using the database. We use five annotators per database to help guarantee diversity of questions. Each annotated two databases, for a total of 20 annotators and 400 questions. The annotators are not required to possess SQL knowledge so their questions are more reflective of natural user interests. Importantly, to discourage users from using the same terms from the database schema in their questions, we replace the original column names with the column descriptions. When annotating the questions, the annotators are shown a paragraph description of the database, table names, column descriptions and ten sampled rows for each table. We do not provide any constraints or templates other than asking them to avoid using exact phrases from the column headings in the questions. Appendix A.2.3 shows the full guidelines. Separately, each question is annotated with its SQL equivalent by independent SQL experts. They are given full access to all of the data content and database schema. One-third of the questions were yes/no, percentage, temporal, or unexpressible in SQL and were not considered in our evaluation of SOTA models (see Appendix A.2.2 for details), leaving 272 questions in total. 
Each database has associated plain-text documentation that can assist text-to-SQL parsing. It is commonly found as internal documentation for database administrators or external documentation accompanying a dataset release. The contents vary but often contain an overview of the database domain, descriptions of tables and columns, sample queries, original sources, and more. While all of these types of information could be leveraged to assist with domain transfer, in this work we focus on the column descriptions. They help address the schema linking problem of textto-SQL parsing, i.e. aligning entity references in the question with database columns We manually extract the column descriptions from the database documentation and provide the mapping from column to description as part of KaggleDBQA. The descriptions are free text and sometimes contain additional information such as defining the values in an categorical column. Such information could help with the value-linking problem (mapping a value in the question to the column that likely contains it). We leave the entire description as a single field and leave it to future work to explore these uses further. In addition to column descriptions, we also include the original unstructured documentation which can be used for future research on automatically extracting descriptions or leveraging other domain knowledge. The current cross-domain datasets Spider We postulate that it is more realistic to assume a setting where an application author spends 1-2 hours authoring examples and adapting existing database documentation. This time investment is a small fraction of the time required to prepare an application itself and so we believe application authors would devote the time if it resulted in increased text-to-SQL accuracy. In informal experiments, we have found SQL annotators can author 10-20 examples in an hour. Thus, the KaggleDBQA evaluation setting is few-shot: 30% of the questions for each domain (6-15 depending on the domain) are designated as in-domain and may be used as part of training for that domain, along with documentation. The remaining 70% are used for evaluation. We report accuracy in both the few-shot as well as the standard zero-shot (cross-domain) setting in this paper, but consider the few-shot setting to be the primary evaluation setting for KaggleDBQA. Evaluation is conducted on the same 70% portion regardless of setting, to ensure comparable results. We compare KaggleDBQA with previous benchmark datasets using key metrics in Table We also analyze the overlap between question terms and column descriptions in Table To measure the complexity of SQL in KaggleDBQA, we adopt the hardness criteria of Spider and report the numbers in Figure We first evaluate KaggleDBQA using models that were developed for the Spider dataset. EditSQL (Zhang et al., 2019): EditSQL (with BERT) is the highest-performing model on the Spider dataset that also provides an open-source implementation along with a downloadable trained model. + BERT) is the model with highest accuracy on the Spider leaderboard that also provides an opensource implementation. 4,5 It adds string matching to the encoder through the use of relation-aware self-attention and adopts a tree-based decoder to ensure the correctness of the generated SQL. Throughout this paper, we use the same exactmatch accuracy metric introduced by the Spider dataset. 
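As a concrete reading of the few-shot protocol described above, a per-domain 30/70 split might be implemented as follows. The record fields are hypothetical, and the released KaggleDBQA split is presumably fixed rather than re-sampled; the sketch only illustrates the protocol.

```python
# Sketch of the few-shot evaluation split: for every database (domain), 30%
# of its questions become in-domain examples available for fine-tuning and
# the remaining 70% are held out for evaluation in both the zero-shot and
# few-shot settings.

import random
from collections import defaultdict
from typing import Dict, List, Tuple


def few_shot_split(examples: List[dict], ratio: float = 0.3, seed: int = 0
                   ) -> Tuple[Dict[str, List[dict]], Dict[str, List[dict]]]:
    """`examples` are dicts with (hypothetical) keys 'db_id', 'question', 'sql'."""
    by_domain: Dict[str, List[dict]] = defaultdict(list)
    for ex in examples:
        by_domain[ex["db_id"]].append(ex)

    rng = random.Random(seed)
    fewshot, evaluation = {}, {}
    for db_id, exs in by_domain.items():
        exs = exs[:]
        rng.shuffle(exs)
        k = round(ratio * len(exs))        # 6-15 examples per KaggleDBQA domain
        fewshot[db_id] = exs[:k]
        evaluation[db_id] = exs[k:]        # the fixed 70% evaluation portion
    return fewshot, evaluation
```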
Although our primary evaluation setting is few-shot, we first examine the traditional zeroshot setting to present an unbiased comparison with previous results. Table For all further experiments on KaggleDBQA that emulate real-world evaluation, we choose RAT-SQL as the best performing parser. To apply RAT-SQL to KaggleDBQA's few-shot setting, for each domain we create a model by fine-tuning on its 30% in-domain data. See Appendix A.3 for implementation details. This fine- tuning is always performed as the last step before evaluation. As Table The database schemas in KaggleDBQA are obscure, making the task difficult without leveraging the database documentation. We consider only the column descriptions, but other portions of the documentation may prove useful in future work. The best approach for incorporating column descriptions into a text-to-SQL model is model-specific. RAT-SQL makes use of relations between question tokens and schema terms to assist with schemalinking. We extend the same functionality to column descriptions by appending the column descriptions to the column names (separated by a period) and recomputing matching relations. The concatenated column name is also presented to the transformer encoder for schema encoding. Simply adding these descriptions results in mismatch between the training set (Spider) which does not have descriptions, and the evaluation set (KaggleDBQA) which does. To alleviate it, we first augment the schemas in Spider with artificial descriptions. For column of table , the description for is "the of the ". We then retrain RAT-SQL on Spider with these artificial descriptions. Since the artificial descriptions simply restate information from the schema, the model may not learn to leverage them for any further information about schema linking and simply treat them as noise. Therefore, we also evaluate RAT-SQL adapted to the general domain of KaggleDBQA so that it (a) experiences useful descriptions and (b) adapts to the language distribution of KaggleDBQA. We evaluate the benefits of this adaptation using leaveone-out: for each domain in KaggleDBQA, we finetune the model on all other domains except for the target (with the same fine-tuning parameters as for few-shot learning). Adapting in this way is predictive of the performance of a novel domain with similar characteristics. As with the other few-shot results, the model is then fine-tuned on the few examples of target domain data. Adaptation and fine-tuning are two separate training processes. Adaptation is meant to adapt to the real-world distribution. Fine-tuning is meant to adjust for in-domain knowledge. The most effective setting for a target database in our experiments is to conduct adaptation first, followed by fine-tuning. Table One of the major challenges in KaggleDBQA is that column names are often obscure or abbreviated. A natural question is whether this creates difficulty because the model struggles to understand the meaning of a column or because it leads to a low overlap between question and column terms. In an attempt to tease these factors apart, we created a normalized version of KaggleDBQA by replacing the obscure column names with normalized column names such as one might find in the Spider dataset. This was done manually using column descriptions to help clarify each column and without introducing any extra knowledge into the column names except for the expansion of abbreviations (e.g. t_fed_rev → total federal revenue). 
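Both schema-text tricks, appending a column's documentation description to its name with a period separator and backfilling artificial descriptions of the form "the <column> of the <table>" when retraining on Spider, are simple to sketch; the schema dictionary layout and the example description below are illustrative rather than taken from the released data.

```python
# Sketch of augmenting column names with descriptions before schema encoding.
# Columns with no documentation description fall back to an artificial
# "the <column> of the <table>" description, as done for Spider.

from typing import Dict, List


def augment_with_descriptions(columns: List[Dict[str, str]],
                              descriptions: Dict[str, str]) -> List[str]:
    """`columns` are dicts with 'table' and 'name'; `descriptions` maps a
    column name to its free-text description from the database documentation."""
    augmented = []
    for col in columns:
        desc = descriptions.get(col["name"])
        if desc is None:
            # Artificial description used when retraining on Spider schemas.
            desc = f"the {col['name']} of the {col['table']}"
        augmented.append(f"{col['name']} . {desc}")
    return augmented


# Illustrative table name and description text.
print(augment_with_descriptions(
    [{"table": "fed_revenue", "name": "t_fed_rev"}],
    {"t_fed_rev": "total federal revenue"}))
# -> ['t_fed_rev . total federal revenue']
```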
In Table Normalization helps clarify the obscure column names of KaggleDBQA. However, the other chal- Dataset Collection The data collection process was pre-approved by IRB. Each annotator agreed to a consent form before having access to the labeling task. Each annotator was rewarded with a $20 e-gift card for the approximately one hour of their time. The authors of this paper acted as the SQL an-notators and incurred no additional compensation. The databases collected for KaggleDBQA were individually reviewed to ensure they were properly licensed for re-distribution. For other details of dataset construction, please refer to Section 3. Aside from email addresses, no personal information of annotators was collected during our study. Email addresses were not shared and were promptly deleted after compensation had been provided. The association between annotator and annotation was deleted before any analysis or distribution was conducted. Language Distribution KaggleDBQA only includes question annotations and databases in English, thus evaluating multi-lingual text-to-SQL models on it will require translation. The set of annotators included both native and second-language speakers of English, all fluent. KaggleDBQA is to encourage the development of DBQA that will work in real-world settings. The actual deployment of a text-to-SQL parser must be conducted with appropriate safeguards in place to ensure users understand that the answers may be incorrect, especially if those answers are to be used in decision making. We show the zero shot testing and out-of-domain adaptation results in Table For each user, we show two different HTML files that contain different instructions of the task, database overview, table name(s), column descriptions, ten sampled rows of the database content. Question annotators were allowed to write any type of question without restriction. While this represents a natural distribution of questions one might expect to encounter in a realistic setting, some types do not appear in the Spider training set and thus pose particular difficulty with current text-to-SQL systems. We remove these from the official evaluation but still include them in the dataset for future work on these types of questions. Table We also establish few guidelines and follow them throughout the annotation process: 1. If the referred column is categorical, use "=" operator with the value from the database (e.g., Where is the area with the largest number 2. Sometimes ID columns are paired with their name realizations (e.g., state_code and state). We choose to return ID whenever users do not explicitly ask for the name realizations. 3. Duplicate rows can sometimes yield an incorrect result. However, it is not possible for models to know in advance unless they encode database content. So we use the DISTINCT operator when necessary to return the correct answer or it is explicitly asked for by the user (e.g., What are titles for each unique entry?). For all our experiments we use the RAT-SQL official implementation and the pre-trained BERT-Large from Google. | 1,087 | 2,698 | 1,087 |
Bilingual Lexicon Induction via Unsupervised Bitext Construction and Word Alignment | Bilingual lexicons map words in one language to their translations in another, and are typically induced by learning linear projections to align monolingual word embedding spaces. In this paper, we show it is possible to produce much higher quality lexicons with methods that combine (1) unsupervised bitext mining and (2) unsupervised word alignment. Directly applying a pipeline that uses recent algorithms for both subproblems significantly improves induced lexicon quality and further gains are possible by learning to filter the resulting lexical entries, with both unsupervised and semisupervised schemes. Our final model outperforms the state of the art on the BUCC 2020 shared task by 14 F 1 points averaged over 12 language pairs, while also providing a more interpretable approach that allows for rich reasoning of word meaning in context. Further analysis of our output and the standard reference lexicons suggests they are of comparable quality, and new benchmarks may be needed to measure further progress on this task. 1 | Bilingual lexicons map words in one language to their translations in another, and can be automatically induced by learning linear projections to align monolingual word embedding spaces We show that simply pipelining recent algorithms for unsupervised bitext mining These core contributions are established by systematic experiments in the class of bitext construction and alignment methods (Figure In addition to BLI, our method can also be directly adapted to improve word alignment and reach competitive or better alignment accuracy than the state of the art on all investigated language pairs. We find that improved alignment in sentence representations Our final BLI approach outperforms the previous state of the art on the BUCC 2020 shared task | Bilingual lexicon induction (BLI). The task of BLI aims to induce a bilingual lexicon (i.e., word translation) from comparable monolingual corpora (e.g., Wikipedia in different languages). Following Word alignment. Word alignment is a fundamental problem in statistical machine translation, of which the goal is to align words that are translations of each in within parallel sentences Bitext mining/parallel corpus mining. Bitext mining has been a long studied task We build on unsupervised methods for word alignment and bitext construction, as reviewed below. SimAlign Based on the similarity matrix, the argmax algorithm aligns the positions that are the simultaneous column-wise and row-wise maxima. To increase recall, We consider two methods for bitext construction: unsupervised machine translation (generation; Generation Retrieval (1) = cos (s, t) as the metric to induce lexicon, where mat(s, t) and coc(s, t) denote the one-to-one matching count (e.g., guten-good; Figure We also propose a weakly supervised method, which assumes access to a seed lexicon. This lexicon is used to train a classifier to further filter the potential lexical entries. For a pair of word type s, t , our classifier uses the following global features: • Count of alignment: we consider both one-toone alignment (Section 4.1) and many-to-one alignment (e.g., danke-you and danke-thank; Figure • Count of co-occurrence used in Section 4.1. • The count of s in the source language and t in the target language. 
6 • Non-contextualized word similarity: we feed the word type itself into CRISS, use the average pooling of the output subword embeddings, and consider both cosine similarity and dot-product similarity as features. For a counting feature c, we take log (c + θ c ), where θ consists of learnable parameters. There are 7 features in total, which is denoted by x s,t ∈ R 7 . We compute the probability of a pair of words s, t being in the induced lexicon P Θ (s, t) 7 by a ReLU activated multi-layer perceptron (MLP): where σ(•) denotes the sigmoid function, and Recall that we are able to access a seed lexicon, which consists of pairs of word translations. In the training stage, we seek to maximize the log likelihood: where D + and D -denotes the positive training set (i.e., the seed lexicon) and the negative training set respectively. We construct the negative training set by extracting all bilingual word pairs that cooccurred but are not in the seed word pairs. We tune two hyperparameters δ and n to maximize the F 1 score on the seed lexicon and use them for inference, where δ denotes the prediction threshold and n denotes the maximum number of translations for each source word, following The idea of using an MLP to induce lexicon with weak supervision (Section 4.2) can be directly extended to word alignment. Let B = { S i , T i } N i=1 6 SimAlign sometimes mistakenly align rare words to punctuation, and such features can help exclude such pairs. 7 Not to be confused with joint probability. Algorithm 1: Inference algorithm for weakly-supervised lexicon induction. Input: Thresholds δ, n, Model parameters Θ, source words S Output: Induced lexicon L L ← ∅ for s ∈ S do ( s, t 1 , . . . , s, t k ) ← bilingual word pairs sorted by the descending order of denote the constructed bitext in Section 3.2, where N denotes the number of sentence pairs, and S i and T i denote a pair of sentences in the source and target language respectively. In a pair of bitext S, T , S = s 1 , . . . , s s and T = t 1 , . . . , t s denote sentences consist of word tokens s i or t i . For a pair of bitext, SimAlign with a specified inference algorithm produces word alignment A = { a i , b i } i , denoting that the word tokens s a i and t b i are aligned. We substitute the non-contextualized word similarity feature (Section 4.2) with contextualized word similarity where the corresponding word embedding is computed by averaging the final-layer contextualized subword embeddings of CRISS. The cosine similarities and dot-products of these embeddings are included as features. Instead of the binary classification in Section 4.2, we do ternary classification for word alignments. For a pair of word tokens s i , t j , the gold label y s i ,t j is defined as Intuitively, the labels 0 and 2 represents confident alignment or non-alignment by both methods, while the label 1 models the potential alignment. The MLP takes the features x s i ,t j ∈ R 7 of the word token pair, and compute the probability of each label y by ĥ On the training stage, we maximize the log-likelihood of groundtruth labels: On the inference stage, we keep all word token pairs s i , t j that have as the prediction. Throughout our experiments, we use a two-layer perceptron with the hidden size of 8 for both lexicon induction and word alignment. We optimize all of our models using Adam (Kingma and Ba, 2015) with the initial learning rate 5 × 10 -4 . 
For our bitext construction methods, we retrieve the best matching sentence or translate the sentences in the source language Wikipedia; for baseline models, we use their default settings. For evaluation, we use the BUCC 2020 BLI shared task dataset We compare the following baselines: BUCC. Best results from the BUCC 2020 VECMAP. Popular and robust method for aligning monolingual word embeddings via a linear projection and extracting lexicons. Here, we use the standard implementation WM. WikiMatrix We evaluate bidirectional translations from beam search (GEN; Section 3.2), bidirectional translations from nucleus sampling (GEN-N; Holtzman et al., 2020), Our main results are presented in Table Bitext quality. Since RTV achieves surprisingly high performance, we are interested in how much the quality of bitext affects the lexicon induction performance. We divide all retrieved bitexts with score (Eq. 1) larger than 1 equally into five sections with respect to the score, and compare the lexicon Table induction performance (Table In general, the lexicon induction performance of RTV correlates well with the quality of bitext. Even using the bitext of the lowest quality (RTV-5), it is still able to induce reasonably good bilingual lexicon, outperforming the best numbers reported by BUCC 2020 participants (Table Word alignment quality. We compare the lexicon induction performance using the same set of constructed bitext (RTV) and different word aligners (Table Bitext quantity. We investigate how the BLI performance changes when the quantity of bitext changes (Figure Dependence on word frequency of GEN vs. RTV. We observe that retrieval-based bitext construction (RTV) works significantly better than generationbased ones On average and for the majority of language pairs, both methods do better on low-frequency source words than high-frequency ones (Figure VECMAP. While BLI through bitext construction and word alignment clearly achieves superior performance than that through vector rotation (Table 1), we further show that the gap is larger on low-frequency words (Figure Following the advice of We evaluate different word alignment methods (Table de-en en-fr en-hi ro-en following We present a direct and effective framework for BLI with unsupervised bitext mining and word alignment, which sets a new state of the art on the task. From the perspective of pretrained multilingual models umich.edu/ ˜mihalcea/wpt (en-fr and ro-en); A Language-Specific Analysis While Figure We show examples of mined bitext with different quality (Table Precision@1 (P@1) is a widely applied metric to evaluate bilingual lexicon induction To understand the remaining errors, we randomly sampled 400 word pairs from the induced lexicon and compare them to ground truth as and Google Translate via =googletranslate(A1, "zh", "en"). All error cases are included in Table 10. In overall precision, our induced lexicon is comparable to the output of Google translate API where there are 17 errors for GEN-RTV 14 errors for Google and 4 common errors. Many natural problems are actually promise problems. RTV-1 Cold climates may present special challenges. 很顯然,曾經在某個場合達成了其所不知道的某種協議。I thought they'd come to some kind of an agreement. The plotline is somewhat different from the first series. He also made sketches and paintings. zh-en 此節目被批評為宣揚偽科學和野史。 The book was criticized for misrepresenting nutritional science. Eulogies were given by the Rev. de-en Gespielt wird meistens Mitte Juni. It is played principally on weekends. 
Further de-en examples from the appendix table of mined bitext error cases (source sentence ⇒ retrieved sentence): Schuppiger Schlangenstern ⇒ Plains garter snake; Das Artwork stammt von Dave Field. ⇒ The artwork is by Mike Egan.; Ammonolyse ist eine der Hydrolyse analoge Reaktion, ⇒ Hydroxylation is an oxidative process.; Die Pellenz gliedert sich wie folgt: ⇒ The Pellenz is divided as follows:; de-en Auch Nicolau war praktizierender Katholik. ⇒ Cassar was a practicing Roman Catholic.; Im Jahr 2018 lag die Mitgliederzahl bei 350. ⇒ The membership in 2017 numbered around 1,000.; Er trägt die Fahrgestellnummer TNT 102. ⇒ It carries the registration number AWK 230.; Als Moderator war Benjamin Jaworskyj angereist. ⇒ Dmitry Nagiev appeared as the presenter.; Benachbarte Naturräume und Landschaften sind: ⇒ Neighboring hydrographic watersheds are: | 1,034 | 751 | 1,034
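Returning to the unsupervised induction statistic of Section 4.1, Eq. (1) is only partially legible above (it mentions mat(s, t), coc(s, t) and a cosine term). The sketch below shows one plausible reconstruction: count one-to-one alignment matches and sentence-level co-occurrences over the constructed bitext, then weight the match ratio by an embedding cosine similarity. The `cosine_sim` callable (e.g., over pooled CRISS embeddings) and the exact way the terms are combined are assumptions, not the paper's definition.

```python
from collections import Counter
from itertools import product

def induction_counts(bitext, alignments):
    """Collect the counts used by the Section 4.1 statistic.

    bitext: list of (src_tokens, tgt_tokens) sentence pairs (mined or translated).
    alignments: list of sets of (i, j) index pairs produced by a word aligner
                such as SimAlign with argmax inference.
    Returns one-to-one match counts mat(s, t) and sentence-level
    co-occurrence counts coc(s, t) over word types.
    """
    mat, coc = Counter(), Counter()
    for (src, tgt), align in zip(bitext, alignments):
        src_deg = Counter(i for i, _ in align)
        tgt_deg = Counter(j for _, j in align)
        for i, j in align:
            # one-to-one match: both positions aligned exactly once
            if src_deg[i] == 1 and tgt_deg[j] == 1:
                mat[(src[i], tgt[j])] += 1
        for s, t in product(set(src), set(tgt)):
            coc[(s, t)] += 1
    return mat, coc

def pair_score(s, t, mat, coc, cosine_sim):
    # Plausible reading of the garbled Eq. (1): reward pairs that are
    # consistently one-to-one aligned whenever they co-occur, weighted by the
    # cosine similarity of the two word types' embeddings.
    if coc[(s, t)] == 0:
        return 0.0
    return mat[(s, t)] / coc[(s, t)] * cosine_sim(s, t)
```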
Near-Negative Distinction: Giving a Second Life to Human Evaluation Datasets | Precisely assessing the progress in natural language generation (NLG) tasks is challenging, and human evaluation to establish a preference in a model's output over another is often necessary. However, human evaluation is usually costly, difficult to reproduce, and non-reusable. In this paper, we propose a new and simple automatic evaluation method for NLG called Near-Negative Distinction (NND) that repurposes prior human annotations into NND tests. In an NND test, an NLG model must place a higher likelihood on a high-quality output candidate than on a near-negative candidate with a known error. Model performance is established by the number of NND tests a model passes, as well as the distribution over task-specific errors the model fails on. Through experiments on three NLG tasks (question generation, question answering, and summarization), we show that NND achieves a higher correlation with human judgments than standard NLG evaluation metrics. We then illustrate NND evaluation in four practical scenarios, for example performing fine-grain model analysis, or studying model training dynamics. Our findings suggest that NND can give a second life to human annotations and provide low-cost NLG evaluation. | Pre-training of large language models has fueled recent progress in many natural language generation (NLG) tasks such as summarization The gold standard for NLG evaluation is manual expert annotation: it can be highly precise and fully customized to an NLG task, helping identify model limitations, and setting the direction of future work. The main limitation of manual expert annotation is the complexity and cost associated with running an evaluation. The cost often increases linearly or quadratically with the number of models compared, restricting evaluation to a small number of models. To circumvent the cost of expert evaluation, many in the field rely on automatic metrics such as BLEU In this paper, we propose a new and simple automatic framework for the evaluation of NLG models which we call Near-Negative Distinction (NND). At a high level, the NND framework bridges the gap between expert annotation and automated metrics by repurposing existing annotations into a series of automatic tests which assess how likely a model is to avoid previously annotated errors. The first contribution of our work is the definition of the NND framework, illustrated in Figure The second contribution is the creation of NND datasets from existing human evaluations for three NLG tasks: question generation, generative question answering, and summarization. On these three tasks, verification experiments find that NND pass rates correlate better with human judgments than existing evaluation metrics, both n-gram-based metrics such as BLEU The third contribution is a collection of practical experiments showcasing how to use NND. The experiments demonstrate the flexibility of the NND framework, showing it can be useful to extrapolate a model's performance in a user study, perform fine-grain model analysis, study scaling effects in model families, or discover trends during training. Although we focus experiments on the English language, the NND framework is not Englishspecific, and we encourage the community to experiment with NND evaluation, helping to expand it to new NLG domains and languages. 
We publicly release the NND datasets we generated as well as the code needed to create new NND datasets, and models used in experiments | We now detail the process of transforming preexisting human annotations into an NND dataset and show how to perform NND evaluation. A human annotation dataset D consists of (context, candidate) tuples that have been annotated typically with one or more labels from a discrete error categorization. Several properties are required from human annotation datasets to be compatible with the NND framework. First, several candidates should be annotated for each context, so that pairs of candidates can be formed into unit NND tests. Second, it should be possible to map error categories to varying quality levels. For instance in Figure 1. Group By Context: Group all annotated candidates for a given context, typically each candidate originates from an NLG model. 2. Assign Quality: Assign a quality to each candidate within a group based on its annotation. 3. Generate Candidate Pairs: For a given context, construct all pairs of candidates of differing quality (C high , C low ). The difference in quality between some error categories might not be known (e.g., the difference between "Not Fluent" and "Not Factual" candidates in Figure The finalized NND dataset consists of (context, C high , C low ) triplets we call NND tests. Most text generators are language models, which assign a probability to a sequence of tokens. Sequence probability can be used during generation to rank partial candidates such as in beam search generation, however most often a generated sequence's likelihood is discarded once generation is completed. In NND, we make use of sequence likelihoods to assess whether models are likely to reproduce the mistakes of previous models, or if they can correctly assign lower likelihood to low-quality candidates. Formally, each candidate C is tokenized into a sequence of tokens: w 1 , ...w N , and a candidate's likelihood is computed in the following way: (1) where P (w i |...) is the probability assigned by the model to the i-th token of the candidate, and ct is the input context. We use log likelihood instead of likelihood, a standard step to improve numerical stability. We further choose to normalize the likelihood by the sequence length (N ) to counterbalance the effect of sequence length on likelihood. An NND test is performed by computing the likelihood of both candidates LL(C high ) and LL(C low ) and comparing both. The model passes the test if (2) In cases where the model fails the test, the error category of C low is recorded, allowing to compute NND pass rates for each category of error. By administering an entire dataset of tests, the NND produces two outputs: first an overall pass rate which is the percentage of NND tests passed by the model, and the breakdown of pass rates for each error category. The two outputs complement each other: the former can be used to compare models, and the second can be used to inspect model performance and discover model limitations. To gain an understanding of the quality of NND estimates, we run verification experiments assessing the level of correlation between NND estimates of model performance and human reference annotations. We run identical verification experiments with a set of standard NLG metrics. 
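A minimal sketch of the scoring step follows: the length-normalized log-likelihood of Eq. (1) and the pass-rate computation of Eq. (2), using a Hugging Face seq2seq model as a stand-in for the evaluated generator (the checkpoint name is only an example; any model with a standard LM head works).

```python
import torch
from collections import Counter
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Example checkpoint; the evaluated generator would be substituted here.
tok = AutoTokenizer.from_pretrained("facebook/bart-base")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base").eval()

@torch.no_grad()
def length_normalized_ll(context: str, candidate: str) -> float:
    """Eq. (1): average per-token log-likelihood of `candidate` given `context`."""
    enc = tok(context, return_tensors="pt", truncation=True)
    labels = tok(candidate, return_tensors="pt", truncation=True).input_ids
    out = model(**enc, labels=labels)
    logprobs = torch.log_softmax(out.logits, dim=-1)
    token_ll = logprobs.gather(-1, labels.unsqueeze(-1)).squeeze(-1)
    return token_ll.mean().item()

def run_nnd(tests):
    """tests: list of (context, c_high, c_low, error_category) NND tuples.
    Returns the overall pass rate and the failure counts per error category."""
    passed, fail_by_cat = 0, Counter()
    for ctx, c_high, c_low, cat in tests:
        if length_normalized_ll(ctx, c_high) > length_normalized_ll(ctx, c_low):
            passed += 1
        else:
            fail_by_cat[cat] += 1
    return passed / len(tests), fail_by_cat
```

Passing `labels` to the model yields logits aligned with the candidate tokens, so the per-token log-probabilities can be gathered directly; averaging them implements the length normalization described above.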
We design two verification experiments based on desired properties for an evaluation metric: (1) Rank Correlation, an evaluation metric should rank NLG models similarly to rankings based on human annotation, (2) Gap Correlation, a metric's estimate of gaps in performance between pairs of models should correlate positively with gaps measured through human annotation (i.e., if human annotation reveals a large gap in performance between two models, the evaluation metric should similarly estimate a large gap). For Rank Correlation, given a set of NLG models and a metric, we compute the Kendall rank correlation coefficient (τ ) For Gap Correlation, for each pair of NLG models, we compute the difference in performance according to the metric and according to human annotation. The gaps of all pairs of models are assembled into two vectors of size n 2 , and we compute the Pearson correlation of the two vectors. If a metric achieving a high Gap Correlation is well calibrated and can predict gaps in performance between two models accurately. In Section 3, we introduce NND datasets for three NLG tasks, based on pre-existing human annotations. In Section 4, we perform the verification experiments in the three domains and confirm that NND correlates better with human opinion than well-established NLG metrics. Section 5 introduces practical use-cases of NND evaluation. We first describe NND experiments for the task of Question Generation, based on Quiz Design (QD) dataset We generate NND tests by pairing No Error questions with any question with an error, producing 2,686 NND pairs in total. Examples in Table We run NND experiments with the seven models used in the original QD study (GPT2-{distil,base,med} In generative QA, a QA model receives a question and must generate a potentially abstractive answer. We create an NND dataset by re-purposing the Challenge 300 annotations We run NND experiments with three families of publicly available generative QA models: T5 finetuned on Natural Questions For summarization, we adapt two human annotation datasets to the NND framework: SummEval SummEval consists of 100 documents each with 8 to 9 system-generated summaries annotated with 5-Point Likert scale ratings on four general attributes (Consistency, Coherence, Fluency, and Relevance). We treat each attribute independently, and normalize Likert scale annotations following the SummaC benchmark procedure FRANK focuses annotation on the consistency attribute, offering more specialized error categories. The test portion of FRANK contains 350 news articles, each coupled with 4 or 5 summaries. Each summary has annotations that follow a hierarchical error categorization, breaking down consistency errors into four groups: No Error, Semantic Frame, Discourse, and Verifiability errors. 2 We treat No Error as high-quality, and any other error as lowquality, and generate 824 NND test pairs. We run NND experiments with five summarization models in the SummEval evaluation (M9, M17, M20, M22, M23) and perform a fine-grain comparison of BART-large and PEGASUS Gen. QA Summ. We now present results from running the verification experiments of Section 2.3 on the three domains we study. In our analysis, we compare NND to standard n-gram based evaluation metrics: BLEU Verification results summarized in Table We note an important conceptual difference between NND and the metrics we compare to which are reference-based. 
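Both verification statistics are straightforward to compute with SciPy; the sketch below assumes each metric and the human annotation provide one scalar score per evaluated model.

```python
from itertools import combinations
from scipy.stats import kendalltau, pearsonr

def rank_correlation(metric_scores: dict, human_scores: dict) -> float:
    """Kendall's tau between the metric's model ranking and the human ranking.
    Both arguments map model name -> scalar score."""
    models = sorted(metric_scores)
    tau, _ = kendalltau([metric_scores[m] for m in models],
                        [human_scores[m] for m in models])
    return tau

def gap_correlation(metric_scores: dict, human_scores: dict) -> float:
    """Pearson correlation between pairwise performance gaps, computed over
    all n-choose-2 model pairs, as described for Gap Correlation."""
    pairs = list(combinations(sorted(metric_scores), 2))
    metric_gaps = [metric_scores[a] - metric_scores[b] for a, b in pairs]
    human_gaps = [human_scores[a] - human_scores[b] for a, b in pairs]
    r, _ = pearsonr(metric_gaps, human_gaps)
    return r
```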
Reference-based metrics score a generator by establishing a similarity between the model's candidate outputs and human-written references. In contrast, NND is reference-less and relies on human annotations of several model candidate outputs to evaluate models. We hypothesize that the use of near-negatives, and whether a model is likely to avoid them, provides a useful signal that leads to high-quality model evaluation. We next turn to use the NND framework in practical situations and assume that NND pass rates provide quality estimates of model ranks and performance gaps between models. In Quiz Design, the largest MixQG-3B model was not included in the annotations due to latency requirements for the interface First, the three novel models all achieve strong performances, obtaining three of the best four overall NND pass rates. The MixQG-3B achieves the highest performance overall, seeing a total improvement of 2% when compared to MixQG-L, the best performer at the time of the study, with gains on all three error categories. The Macaw models achieve the strongest performance in Disfluency, but lower performance on Off Target and Wrong Context lead to lower performance overall. These results show that NND can be used to give a second life to human evaluation datasets by projecting model performance a posteriori. Prior work has recognized the BART-Large and PE-GASUS models as close contenders for top performance in summarization To gain specific insights into the differences between the models, we run NND experiments with both models using the general NND test set based on the SummEval annotations, as well as the factual consistency-focused FRANK annotations. Results are summarized in Figure On the SummEval test set, PEGASUS narrowly outperforms BART overall, owing to 4-5% gains in 2098 the consistency and fluency aspects. Performance on the coherence and relevance aspects are narrower, with BART topping coherence, and PEGA-SUS with a slight edge in relevance. The SummEval results are reaffirmed by the FRANK NND experiment, on which PEGASUS also outperforms BART overall, confirming that PEGASUS is better at avoiding factual errors than BART. However, on this more precise error categorization, PEGASUS does not win out entirely, with BART-Large achieving a higher pass rate on the Semantic Frame errors. The NND results confirm that the two models' performance is close, with overall NND pass rates within 2% of each other, yet reveal some subtlety in the specific strengths and weaknesses of each model. Depending on the application, certain attributes might be of more or less importance, and NND could inform a user on which model to select. The authors of the Challenge 300 dataset only annotated text outputs from the largest models available for each model family We run NND experiments for all model sizes available for three families of QA models: T5 finetuned on Natural Questions (Small, Large, 3B, 11B) Focusing on UnifiedQA and Macaw, model performance increases steadily in three question categories: Common Sense, Creativity, and Science, but surprisingly stagnates or decreases in the Comprehension and Entity categories. The NND experiments reveal that although performance tends to improve with model size increase, the trends vary widely by question category: an end-user with a particular question category in mind might benefit from a smaller model size. So far, we ran NND to evaluate finalized models, performing comparisons across models. We now use NND to inspect a model during training. 
We train a BART-base model on the CNN/DM dataset using teacher forcing with cross-entropy loss for three epochs. We perform an NND evaluation of the latest model checkpoint every 2,000 training steps, using the SummEval NND test pairs. Results summarized in Figure 6 Related Work NLG Benchmarks. Following the success of benchmarks such as GLUE LM Likelihood Score. Language-modeling likelihood and perplexity (the exponentiation of log-likelihood) are commonly used to evaluate NLG models External LM Likelihood. Besides the evaluated model's own likelihood, some work has used an external language model's likelihood for scoring. BARTScore Contrastive Learning. The use of negative candidates in NLG has been explored with recent interest in applying contrastive learning Similarly, Language Model Behavioral Analysis. Recent work has built behavioral analysis corpora Datasets Repurposing is common in machine learning and NLP Although we focus on three NLG tasks, annotations from human evaluation in other NLG tasks could be used to expand the framework further in future work, for example with the WMT MQM Flexibility of Framework. NND relies on preexisting human annotations to generate NND test pairs. However, the required annotation format is flexible, our experiments show that NND is compatible with single-error categorizations (e.g., the Quiz Design in Section 3.1), hierarchical categorizations (e.g., FRANK in Section 3.3), or Likert-scale ratings (e.g., SummEval in Section 3.3). NND results adopt the shape of the repurposed human evaluation, for instance, results in Section 5.2 are broken down both by general summarization aspects using the SummEval NND, and further refined to detailed categories with the FRANK NND. Direct Language Model Evaluation. In a typical NLG evaluation, a decoding strategy is used to generate a candidate which is evaluated. Often, authors of a model recommend a decoding strategy to pair with the model, which creates an additional confounding factor in the evaluation, as a better decoding strategy (e.g., Nucleus Sampling Holtzman et al. ( Computationally Inexpensive. Computing candidate likelihood requires a single model forward pass, through teacher forcing, whereas other automated NLG evaluations often require full candidate generation, which is computationally expensive. The low computational cost of NND enables rapid evaluation during training (Section 5.4). Limitations of NND are discussed in Section 9. We introduce the Near-Negative Distinction (NND) framework for the evaluation of NLG models. In the NND framework, a pre-existing human evaluation dataset is repurposed to create NND test pairs comprised of text candidates of differing quality. Models are evaluated on their ability to assign a higher likelihood to high-quality candidates, giving an estimate of whether models would avoid the errors of previously evaluated models. We apply the NND framework to three NLG tasks: question Reliance on Likelihood. Not all NLG models are language models capable of producing candidate likelihoods. For instance, black-box models such as GPT-3 Reliance on Prior Errors. NND relies on annotated errors of previous models to evaluate a new model, which assumes errors made by models remain constant over time. This is limiting, as each generation of models has specific strengths and weaknesses, with new categories of errors emerging over time. 
We recommend that NND be used as a temporary extension to a human evaluation, allowing for a few generations of models to be evaluated on the same benchmark. However, the gold standard of NLG evaluation remains human evaluation, and it should still be performed frequently, and repurposed into updated NND test sets. NND Requirements. Not all human annotations of generated texts can be repurposed for NND evaluation, and the two requirements -outlined in §2.1) -limit usability of the evaluation methodology. More precisely, annotations can be repurposed only if several model outputs are labeled for a given input, and if a partial ordering of quality over the labels is known. We however show in the paper that these requirements are common amongst existing annotation collections. Sensitivity to Normalization. A complication of the NND framework is that it relies on inputting the prior model's outputs into the evaluated model to obtain a likelihood. NLG models use different norms for punctuation and capitalization, making the exchange of generated text across models delicate. Other NLG evaluation metrics are also sensitive to un-normalized texts We focused our experiments on models and datasets for the English language, and even though we expect the NND framework to be adaptable to other languages and settings, we have not verified this assumption experimentally and limit our claims to the English language. The models and datasets utilized primarily reflect the culture of the English-speaking populace. Gender, age, race, and other socio-economic biases may exist in the dataset, and models trained on these datasets may propagate these biases. Question-answering and summarization tasks in particular have previously been shown to contain these biases. We selected question generation, question answering, and summarization as the three NLG domains on which we assessed the NND framework. We expect that the framework will be beneficial in other NLG tasks such as data-to-text, image captioning, or simplification, but have not created NND test sets for these domains and limit our claims to the three tasks we ran experiments for. We note that NND datasets are not novel datasets. Still, transformations of pre-existing human annotation datasets and proper permission to reuse underlying datasets should be granted before usage in the NND framework. Our experiments all relied on publicly released human evaluation annotations with explicit permission for research re-use. | 1,219 | 2,241 | 1,219 |
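As a concrete illustration of the pair-construction procedure of Section 2.1 (group by context, assign quality, pair candidates of differing quality), the sketch below assumes a flat list of (context, candidate, error label) annotations and a binary notion of quality, as in the Quiz Design and FRANK settings; Likert-scale data such as SummEval would instead need a full quality ordering. The function and argument names are illustrative.

```python
from collections import defaultdict
from itertools import product

def build_nnd_tests(annotations, is_high_quality):
    """Repurpose human annotations into NND tests.

    annotations: iterable of (context, candidate, error_label) tuples, with
                 several annotated candidates per context.
    is_high_quality: predicate over error labels, e.g. lambda e: e == "No Error".
    Returns (context, c_high, c_low, error_label_of_c_low) test tuples.
    """
    by_context = defaultdict(list)
    for ctx, cand, label in annotations:
        by_context[ctx].append((cand, label))

    tests = []
    for ctx, cands in by_context.items():
        highs = [c for c, lab in cands if is_high_quality(lab)]
        lows = [(c, lab) for c, lab in cands if not is_high_quality(lab)]
        # pair every high-quality candidate with every lower-quality one;
        # pairs with an unknown relative quality are simply never generated
        for c_high, (c_low, lab) in product(highs, lows):
            tests.append((ctx, c_high, c_low, lab))
    return tests
```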
DQ-BART: Efficient Sequence-to-Sequence Model via Joint Distillation and Quantization | Large-scale pre-trained sequence-to-sequence models like BART and T5 achieve state-ofthe-art performance on many generative NLP tasks. However, such models pose a great challenge in resource-constrained scenarios owing to their large memory requirements and high latency. To alleviate this issue, we propose to jointly distill and quantize the model, where knowledge is transferred from the fullprecision teacher model to the quantized and distilled low-precision student model. Empirical analyses show that, despite the challenging nature of generative tasks, we were able to achieve a 16.5x model footprint compression ratio with little performance drop relative to the full-precision counterparts on multiple summarization and QA datasets. We further pushed the limit of compression ratio to 27.7x and presented the performance-efficiency tradeoff for generative tasks using pre-trained models. To the best of our knowledge, this is the first work aiming to effectively distill and quantize sequence-to-sequence pre-trained models for language generation tasks. | Pretrained sequence-to-sequence (seq2seq) models such as BART § Equal contribution. (2020) trained a BART model with 400M parameters, while The continual growth in model sizes leads to significant demand in both computation and memory resources during inference, and poses a huge challenge on deployment, especially in real-time and/or resource-constrained scenarios. This motivates researchers to compress large pre-trained models to be smaller and faster while retaining strong performance. Among existing compression approaches such as weight-sharing Recently, Shleifer and Rush (2020) applied "shrink and fine-tune" distillation method on BART for text summarization, yet their work focuses more on the methodology for distilling text summarization only. Besides, their work did not yield a sig-nificant model footprint reduction, one of the most challenging issues in the deployment of large models in resource-constrained scenarios. In this work, we try to address the challenge of building a more efficient seq2seq model by answering two research questions: first, how well does the quantized seq2seq model perform on various tasks? Second, how do we combine quantization and distillation to push the limit of compressing the seq2seq model without significant performance losses in challenging tasks like summarization and question answering? To this end, we proposed a joint distillation and quantization framework, which efficiently transfers the knowledge from a full-precision teacher seq2seq model to its student with fewer layers and ultra-low bits for encoding its parameters. Experimental results on BART show that the proposed models reduce the model footprint by 16.5x while preserving competitive performances on multiple language generation benchmarks, and further illustrate the performanceefficiency trade-off of compressing seq2seq models up to 27.7x smaller. To the best of our knowledge, this is the first work aiming to effectively distill and quantize seq2seq pre-trained models for language generation tasks. | In this section, we consider two directions for reducing the size of our generative language model: quantization ( §2.1) and distillation ( §2.2). We apply distillation-aware training ( §2.3) to train a quantized and distilled low-precision model as a student model to emulate the full-precision teacher model. 
Quantization refers to the operation of mapping a real (high-precision) number to its low-precision counterpart in order to achieve model footprint reduction. There has been extensive study on applying quantization to training neural networks. Different quantization schemes include, e.g., linear quantization (e.g., Quantizing BART We applied quantization to the weights of all the hidden layers and most of the embeddings. Following previous work Weight Quantization We dive into the mathematical details of how to quantize the weights in BART models. Let us denote w t ∈ R nt as the vector obtained by stacking all the columns of the full-precision weight matrix W t that we wish to quantize at iteration t. By quantizing w t , we are looking for a scaling factor (also known as quantization step) α t and a low-precision number b t , to replace full precision weight w t with α t b t . When quantizing with more than 2 bits, we are applying the commonly used symmetric linear quantization, with where th = 2 n b -1 -1 and n b is the number of bits we use for quantization. Then b t can be obtained by b t = round(w t /α t ). When quantizing with 2 bits, we use the approximation based TWN method The second task we consider is knowledge distillation, where we train a smaller student model to mimic the behavior of a larger teacher model; specifically, we want to reproduce the output logits, attentions, and hidden states of the teacher model. Following where the RHS are MSE losses measuring the difference between the student and teacher with regard to output logits, attention scores (including encoder attention, decoder attention and cross attention), and hidden states (including all encoder and decoder layers). 1 We include the details of the loss in Appendix B for completeness. To fine-tune our quantized and distilled model, we use the technique of distillation-aware quantization with a teacher-student architecture from 1 Based on an initial small-scale study, we didn't find a significant difference between weighted and unweighted losses in our setting. For simplicity, we use unweighted loss here and leave the tuning of weights for future work. 2 Note that in this work we jointly distill and quantize encoder-decoder models, while In this section, we evaluate the efficacy of jointly Distilling and Quantizing BART (hereinafter, DQ-BART) on text summarization and long-form question answering using three benchmarks: CNN/Dai-lyMail We followed the standard splits of these datasets. The statistics could be found in Appendix C. For ELI5, we reproduced the author's implementation to train a dense retriever that retrieves 10 supporting documents from Wikipedia for each question. Additional details could be found in Appendix D. As our target is achieving efficient seq2seq generative models, we used base-sized BART for summarization and question answering tasks. For machine translation, we used mBART-large due to the lack of pretrained base-sized multilingual BART models. We reused existing models 3 , and finetuned our own models on end tasks when no open-sourced model is available. We trained our quantized-only models for 10 epochs and distilled-and-quantized models for 20 epochs. We used a batch size of 128, a learning rate of 3 × 10 -5 with 5% linear warmup, and selected the best model based on rouge-L scores on the development set. We set generative hyperparameters following previous work 0RGHO6L]H&RPSUHVVLRQ5DWLR We summarized the main results in Table 1. Direct quantization performs poorly in generation tasks. 
The rouge-L score drops ∼50-75% relatively compared with the baseline. 2. The performance of 8-bit distillation-aware quantized models ("8-8-8 6-6") achieves comparable or even better performance compared with the full precision models across all tasks, signaling that 8-bit is not too challenging for generative models like BART, similar to the findings for BERT We further extend our study to see how distillation and quantization work for mBART Future work may explore how to improve the performance of joint distillation and quantization for deep models under a low-bit setting. We want to understand how much gain there is when doing joint distillation and quantization compared with distillation-only method Transformer-based pre-trained seq2seq language models like BART have greatly advanced the state of the art in a range of NLP tasks. Yet, these extremely large-scale models pose a challenge in resource-constrained scenarios. To alleviate this issue, we proposed DQ-BART, a jointly distilled and quantized BART model. Empirical results show that, despite the difficult nature of language generation tasks, we achieve a 16.5x model footprint compression ratio with little performance drop on three generative benchmarks, and further present the performance-efficiency trade-off for seq2seq models up to a 27.7x compression ratio. Additionally, we studied distillation and quantization for mBART on a machine translation task, and highlighted the challenge of joint low-bit quantization with distillation for deeper models on cross-lingual tasks. To the best of our knowledge, our method is the first to apply joint quantization and distillation on pretrained language models, and this is the first work aiming to effectively distill and quantize seq2seq pretrained models for language generation tasks. We hope this work could open doors for developing and applying efficient seq2seq language models. We leave additional compression methods like attention head pruning When quantizing using 2 bits (which is also know as ternarization), following Denote ∆ as a threshold and I ∆ (x) be a function such that and denote set J ∆ = {i | I ∆ (w i ) ̸ = 0}, then according to where ⊙ is element-wise multiplication and || • || 1 is the l 1 -norm. To approximate this result, we set ∆ * = 0.7||w|| 1 /dim(w) then compute α * and b * accordingly. The distillation losses is defined as the following: In this section we'll go through each part of the losses. We denote ϕ enc (•), ϕ dec (•) as the functions that map the index of an encoder/decoder layer of the student model to the index of the teacher model layer that it is trained to emulate, the details of which is discussed in §2.2, and we use l S enc , l S dec to denote the number of encoder layers and decoder layers of the student model. To illustrate, if l S enc = 3, l S dec = 2, we would have: ϕ enc (0, 1, 2) = 0, 3, 5, ϕ dec (0, 1) = 0, 5 For simplicity, we use superscript • S , • T to distinguish counterparts from the student model and teacher model respectively. Next, we will explain the definition of each part of the distillation losses. Firstly, L logits is the Mean Squared Error (MSE) between the output logits of the student model and that of the teacher model, i.e. L logits = M SE(logits S , logits T ) Secondly, L att is the attention distillation loss, which is the sum of distillation losses of encoder attentions (EA), decoder attentions (DA), and cross attention (CA), i.e. with the subscripts i, ϕ(i) specifying the indices of the layers. 
Finally, L hid is the distillation loss between all the hidden states between student layers and teacher layers, which include encoder hidden states (EHS) and decoder hidden states (DHS): C Dataset Statistics In this section, we present additional details for the ELI5 dataset. We were not able to find a public version of supporting documents for ELI5, and thus followed the author's implementation 4 to train a dense retriever 4 Our trained retriever achieves a similar performance compared with the one reported in the author's implementation (recall: ours 0.3273, reported 0.3247). We use the ROUGE-SCORE package We benchmarked the performance of three randomly picked models with the "Shrink and Finetune" schema proposed in | 1,064 | 2,035 | 1,064 |
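The quantization operations referenced in Section 2.1 and Appendix A of this paper can be sketched as below. The scaling-factor formula for the n-bit case is garbled in the text, so the common choice α = max|w| / th is assumed here; the ternary case follows the stated threshold ∆* = 0.7·||w||₁ / dim(w), with α* taken as the mean magnitude of the surviving weights (the standard TWN closed form).

```python
import torch

def quantize_symmetric(w: torch.Tensor, n_bits: int) -> torch.Tensor:
    """Symmetric linear quantization for n_bits > 2.
    th = 2**(n_bits - 1) - 1 as in the text; alpha = max|w| / th is an
    assumed, commonly used scaling factor."""
    th = 2 ** (n_bits - 1) - 1
    alpha = w.abs().max() / th
    b = torch.round(w / alpha).clamp(-th, th)
    return alpha * b            # de-quantized weights used in the forward pass

def quantize_ternary(w: torch.Tensor) -> torch.Tensor:
    """TWN-style 2-bit (ternary) quantization as in Appendix A."""
    delta = 0.7 * w.abs().mean()                    # 0.7 * ||w||_1 / dim(w)
    b = torch.sign(w) * (w.abs() > delta).float()   # entries in {-1, 0, +1}
    mask = b != 0
    alpha = w.abs()[mask].mean() if mask.any() else w.abs().mean()
    return alpha * b
```

In distillation-aware training these de-quantized weights would replace the full-precision ones in the student's forward pass, typically with a straight-through estimator updating the full-precision copies; that detail is not spelled out above and is noted here only as the usual practice.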
OoMMix: Out-of-manifold Regularization in Contextual Embedding Space for Text Classification | Recent studies on neural networks with pretrained weights (i.e., BERT) have mainly focused on a low-dimensional subspace, where the embedding vectors computed from input words (or their contexts) are located. In this work, we propose a new approach, called OoMMix, to finding and regularizing the remainder of the space, referred to as out-ofmanifold, which cannot be accessed through the words. Specifically, we synthesize the outof-manifold embeddings based on two embeddings obtained from actually-observed words, to utilize them for fine-tuning the network. A discriminator is trained to detect whether an input embedding is located inside the manifold or not, and simultaneously, a generator is optimized to produce new embeddings that can be easily identified as out-of-manifold by the discriminator. These two modules successfully collaborate in a unified and end-to-end manner for regularizing the out-of-manifold. Our extensive evaluation on various text classification benchmarks demonstrates the effectiveness of our approach, as well as its good compatibility with existing data augmentation techniques which aim to enhance the manifold. | Neural networks with a word embedding table have been the most popular approach to a wide range of NLP applications. The great success of transformer-based contextual embeddings as well as masked language models Along with outstanding performances of the pretrained weight, researchers have tried to reveal the underlying structure encoded in its embedding space In this work, we propose a novel approach to discovering and leveraging the out-of-manifold for contextual embedding regularization. The key idea of our out-of-manifold regularization is to produce the embeddings that are located outside the manifold and utilize them to fine-tune the network for a target task. To effectively interact with the contextual embedding of BERT, we adopt two additional modules, named as embedding generator and manifold discriminator. Specifically, 1) the generator synthesizes the out-of-manifold embeddings by linearly interpolating two input embeddings computed from actually-observed words, and 2) the discriminator identifies whether an input embedding comes from the generator (i.e., the synthesized embed-ding) or the sequence of words (i.e., the actual embedding). The joint optimization encourages the generator to output the out-of-manifold embeddings that can be easily distinguished from the actual embeddings by the discriminator, and the discriminator to learn the decision boundary between the in-manifold and out-of-manifold embeddings. In the end, the fine-tuning on the synthesized out-of-manifold embeddings tightly regularizes the contextual embedding space of BERT. The experimental results on several text classification benchmarks validate the effectiveness of our approach. In particular, our approach using a parameterized generator significantly outperforms the state-of-the-art mixup approach whose mixing strategy needs to be manually given by a programmer. Furthermore, our approach shows good compatibility with various data augmentation techniques, since the target space we focus on for regularization (i.e., out-of-manifold) does not overlap with the space the data augmentation techniques have paid attention to (i.e., in-manifold). 
The in-depth analyses on our modules provide an insight into how the out-of-manifold regularization manipulates the contextual embedding space of BERT. | In this section, we briefly review two approaches to regularizing over-parameterized network based on auxiliary tasks and auxiliary data. Regularization is an essential tool for good generalization capability of neural networks. One representative regularization approach relies on designing auxiliary tasks. Another approach to network regularization is to take advantage of auxiliary data, mainly obtained by data augmentation, which eventually supplements the input data space. Inspired by Mixup In this section, we propose a novel mixup approach, termed as OoMMix, to regularize the outof-manifold in contextual embedding space for text classification. We first briefly remind the architecture of BERT, then introduce two modules used for out-of-manifold regularization, which are embedding generator and manifold discriminator. BERT is a stack of M transformer encoders pretrained on the objective of the masked language model We fine-tune the pre-trained weight to classify input texts into C classes. A classifier produces the classification probability vector o ∈ R C using the last contextual embedding h (M ) . Then, the optimization problem is defined based on a labeled dataset D = {(x 1 , y 1 ) , ..., (x N , y N )}. minimize where L kl is the Kullback-Leibler divergence and e y ∈ R C is a one-hot vector representing the label y. The function f is the whole process from h (0) to o, called a target model, and w f is the trainable parameters for the function f , including the pretrained weight of BERT and the parameters in the classifier. For notation, f can be split into several sub-processes f where h m m (x) maps the m-th contextual embedding into the m -th contextual embedding through the layers. The goal of our generator network G is to synthesize an artificial contextual embedding by taking two contextual embeddings (obtained from layer m g ) as its input. We use linear interpolation so that the new embedding belongs to the line segment defined by the two input embeddings. Since we limit the search space, the generator produces a single scalar value λ ∈ [0, 1], called a mixing coefficient. We introduce the distribution of the mixing coefficient to model its uncertainty. To this end, our generator network produces the lower bound α and the interval ∆ by using h , so as to sample the mixing coefficient from the uniform distribution U (α, α + ∆). To avoid massive computational overhead incurred by the concatenation of two input sequences The optimization problem for text classification can be extended to the new embeddings and their labels, provided by the generator network. (1) where w fm g is the trainable parameters of the function f mg (i.e., the process from h (mg) to o), and w G is the ones for the generator. Similar to other mixup techniques, we impose the mixed label on the generated embedding. We found that the supervision from the objective (1) is not enough to train the generator. The objective optimizes the generator to produce the embeddings that are helpful for the target classification. However, since the over-parameterized network tends to memorize all training data, the target model also simply memorizes the original data to minimize Equation (1). In this situation, the generator is more likely to mimic the embeddings seen in the training set (memorized by the target model) rather than generate novel embeddings. 
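A sketch of the embedding generator described above: it maps two layer-m_g contextual embeddings to a lower bound α and interval ∆, samples the mixing coefficient λ ~ U(α, α + ∆), and interpolates both the embeddings and the one-hot labels. The pooling, the padding of both inputs to a common sequence length, and the sigmoid parameterization that keeps α + ∆ ≤ 1 are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class EmbeddingGenerator(nn.Module):
    """Produces a mixed contextual embedding from two inputs taken at
    transformer layer m_g, with a learned mixing-coefficient distribution."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * hidden_size, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, 2),
        )

    def forward(self, h1: torch.Tensor, h2: torch.Tensor):
        # h1, h2: (B, L, H) contextual embeddings of two sentences,
        # assumed padded to the same length L.
        pooled = torch.cat([h1.mean(dim=1), h2.mean(dim=1)], dim=-1)
        out = torch.sigmoid(self.net(pooled))           # values in [0, 1]
        alpha = out[:, 0]
        delta = out[:, 1] * (1.0 - alpha)               # so alpha + delta <= 1
        lam = alpha + delta * torch.rand_like(alpha)    # lambda ~ U(alpha, alpha+delta)
        mixed = lam.view(-1, 1, 1) * h1 + (1.0 - lam).view(-1, 1, 1) * h2
        return mixed, lam.unsqueeze(-1)

def mix_labels(y1: torch.Tensor, y2: torch.Tensor, lam: torch.Tensor) -> torch.Tensor:
    # y1, y2: (B, C) one-hot label vectors; lam: (B, 1) sampled coefficients.
    return lam * y1 + (1.0 - lam) * y2
```

The mixed embedding would then be passed through the remaining layers f_{m_g} and trained against the mixed label, as in the extended classification objective above.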
For this reason, we need more useful supervision for the generator, to make it output the out-of-manifold embeddings. To tackle this challenge, we define an additional task that identifies whether a contextual embedding comes from the generator or actual words. The purpose of this task is to learn the discriminative features between actual embeddings and generated embeddings, in order that we can easily discover the subspace which cannot be accessed through the actually-observed words. For this task, we introduce a discriminator network D that serves as a binary classifier in the contextual embedding space of the m d -th transformer layer. The discriminator takes a contextual embedding h (m d ) and calculates the score s ∈ [0, 1] which indicates the probability that h (m d ) comes from an actual sentence (i.e., h (m d ) is located inside the manifold). Its network structure is similar to that of the generator, except that the concatenation is not needed and the output of the two-layer fully connected network produces a single scalar value. As discussed in Section 3.2, any network structures for focusing on different aspects can be employed. The optimization of the generator and discriminator for this task is described as follows. (2) where L bce is the binary cross entropy loss. By minimizing this objective, our generator can produce the out-of-manifold embeddings that are clearly distinguished from the actual (in-manifold) contextual embeddings by the discriminator. We jointly optimize the two objectives to train the embedding generator. Equation (1) encourages the generator to produce the embeddings which are helpful for the target task, while Equation (2) makes the generator produce the new embeddings different from the contextual embeddings obtained from the words. The final objective is defined by where e regulates the two objectives. The generator and discriminator collaboratively search out informative out-of-manifold embeddings for the target task while being optimized with the target model, thereby the generated embeddings can effectively regularize the out-of-manifold. In this section, we present the experimental results supporting the superiority of OoMMix among the recent mixup approaches in text classification. Also, we investigate its compatibility with other data augmentation techniques. Finally, we provide in-depth analyses on our approach to further validate the effect of out-of-manifold regularization. Our experiments consider 4 sentence classification benchmarks For the various sizes of training set from 0.5K to 35K, we apply stratified sampling to preserve the balanced class distributions. In terms of optimization, we use BERT provided by huggingface for the classification tasks. We compare OoMMix with existing mixup techniques. All the existing methods manually set the mixing coefficient, whereas we parameterize the linear interpolation by the embedding generator, optimized to produce out-of-manifold embeddings. • NonlinearMix • MixText Table To demonstrate that the regularization effect of OoMMix does not conflict with that of existing data augmentation techniques, we investigate the performance of BERT that adopts both OoMMix and other data augmentations together. Using three popular data augmentation approaches in the NLP community, we replicate the dataset as large as the original one to use them for fine-tuning. • EDA Figure Moreover, OoMMix has additional advantages over the data augmentations. 
First, OoMMix is still effective in the case that large training data are available. The data augmentation techniques result in less performance gain as the size of training data becomes larger, because there is less room for enhancing the manifold constructed by enough training data. Second, the class label of the augmented sentences given by the data augmentation techniques (i.e., the same label with the original sentences) can be noisy for sentence classification, compared to the label of out-of-manifold embeddings generated by OoMMix. This is because the assumption that the augmented sentences have the same label with their original sentences is not always valid. On the contrary, there do not exist actual (or ground truth) labels for out-of-manifold embeddings, as they do not correspond to actual sentences; this allows our mixup label to be less noisy for text classification. We also investigate how the manifold discriminator affects the training of the embedding generator. Precisely, we compare the distributions of mixing coefficients, obtained from two different generators; they are optimized with/without the manifold discriminator, respectively (Figure The embedding generator without the discriminator gradually moves the distribution of the mixing coefficients toward zero, which means that the generated embedding becomes similar to the actual embedding. Therefore, training the generator without the discriminator fails to produce novel embeddings, which cannot be seen in the original data. In contrast, in the case of the generator with the discriminator, most of the mixing coefficients are located around 0.5, which implies that the generator produces the embeddings which are far from both the two actual embeddings to some extent. We also observe that the average objective value for our discrimination task (Equation ( We further examine the effect of the location of our generator and discriminator (i.e., m g and m d ) on the final classification performance. Figure Finally, we visualize our contextual embedding space to qualitatively show that OoMMix discovers and leverages the space outside the manifold for regularization. We apply Isomap In the yz-plane, the actual sentence embeddings form multiple clusters, optimized for the text clas-sification task. At the same time, the generated embeddings are located in the different region from the space enclosing most of the actual embeddings. In the second plot, we colorize the generated embeddings with their predicted class. The predicted class of out-of-manifold embeddings are well-aligned with that of the actual embeddings, which means that OoMMix imposes the classification capability on the out-of-manifold region as well. We change the camera view to xy-plane and repeat the same process to show the alignment of class distribution clearly (in the third/fourth plots). By imposing the classification capability on the extended dimension/subspace (i.e., out-of-manifold), OoM-Mix significantly improves the classification performance for the original dimension/subspace (i.e., in-manifold). This paper proposes OoMMix to regularize out-ofmanifold in the contextual embedding space. Our main motivation is that the embeddings computed from the words only utilize a low-dimensional manifold while a high-dimensional space is available for the model capacity. Therefore, OoMMix discovers the embeddings that are useful for the target task but cannot be accessed through the words. 
With the help of the manifold discriminator, the embedding generator successfully produces out-of-manifold embeddings with their labels. We demonstrate the effectiveness of OoMMix and its compatibility with the existing data augmentation techniques. Our approach is a bit counter-intuitive in that the embeddings that cannot be accessed through the actual words are helpful for the target model. As the discrete features from texts (i.e., words), embedded into the high-dimensional continuous space where their contexts are encoded, cannot cover the whole space, the uncovered space also should be carefully considered for any target tasks. In this sense, we need to regularize the out-of-manifold to prevent anomalous behavior in that space, which is especially important for a large pre-trained contextual embedding space. | 1,149 | 2,311 | 1,149 |
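For completeness, a sketch of the manifold discriminator and the joint objective (Eqs. 1-3): the discriminator scores whether a layer-m_d embedding comes from actual words, and, unlike a GAN, the generator is trained so that its outputs are easily identified as out-of-manifold, so both modules minimize the same discrimination loss. The pooling and the name `lam_e` for the trade-off weight (printed as "e" in the text above) are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ManifoldDiscriminator(nn.Module):
    """Scores whether an embedding from layer m_d comes from actual words
    (in-manifold, label 1) or from the generator (out-of-manifold, label 0)."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden_size, hidden_size), nn.ReLU(),
            nn.Linear(hidden_size, 1),
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:   # h: (B, L, H)
        return torch.sigmoid(self.net(h.mean(dim=1))).squeeze(-1)

def joint_loss(cls_logits, mixed_labels, disc_real, disc_fake, lam_e=1.0):
    """Plausible reading of Eqs. (1)-(3): a KL classification term on mixed
    labels plus a discrimination term; minimizing the latter w.r.t. both the
    discriminator and the generator pushes generated embeddings off-manifold."""
    cls_loss = F.kl_div(F.log_softmax(cls_logits, dim=-1), mixed_labels,
                        reduction="batchmean")
    disc_loss = (F.binary_cross_entropy(disc_real, torch.ones_like(disc_real)) +
                 F.binary_cross_entropy(disc_fake, torch.zeros_like(disc_fake)))
    return cls_loss + lam_e * disc_loss
```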
Towards Generative Aspect-Based Sentiment Analysis * | Aspect-based sentiment analysis (ABSA) has received increasing attention recently. Most existing work tackles ABSA in a discriminative manner, designing various task-specific classification networks for the prediction. Despite their effectiveness, these methods ignore the rich label semantics in ABSA problems and require extensive task-specific designs. In this paper, we propose to tackle various ABSA tasks in a unified generative framework. Two types of paradigms, namely annotation-style and extraction-style modeling, are designed to enable the training process by formulating each ABSA task as a text generation problem. We conduct experiments on four ABSA tasks across multiple benchmark datasets where our proposed generative approach achieves new state-of-the-art results in almost all cases. This also validates the strong generality of the proposed framework which can be easily adapted to arbitrary ABSA task without additional taskspecific model design. 1 | Aspect-based sentiment analysis (ABSA), aiming at mining fine-grained opinion information towards specific aspects, has attracted increasing attention in recent years The main research line of ABSA focuses on the identification of those sentiment elements such as extracting the aspect term In general, most ABSA tasks are formulated as either sequence-level or token-level classification problems Motivated by recent success in formulating sev-eral language understanding problems such as named entity recognition, question answering, and text classification as generation tasks In order to enable the Generative Aspect-based Sentiment analysis (GAS), we tailor-make two paradigms, namely annotation-style and extractionstyle modeling to transform the original task as a generation problem. Given a sentence, the former one adds annotations on it to include the label information when constructing the target sentence; while the latter directly adopts the desired natural language label of the input sentence as the target. The original sentence and the target sentence produced by either paradigm can then be paired as a training instance of the generation model. Furthermore, we propose a prediction normalization strategy to handle the issue that the generated sentiment element falls out of its corresponding label vocabulary set. We investigate four ABSA tasks including Aspect Opinion Pair Extraction (AOPE), Unified ABSA (UABSA), Aspect Sentiment Triplet Extraction (ASTE), and Target Aspect Sentiment Detection (TASD) with the proposed unified GAS framework to verify its effectiveness and generality. Our main contributions are 1) We tackle various ABSA tasks in a novel generative manner; 2) We propose two paradigms to formulate each task as a generation problem and a prediction normalization strategy to refine the generated outputs; 3) We conduct experiments on multiple benchmark datasets across four ABSA tasks and our approach surpasses previous state-of-the-art in almost all cases. Specifically, we obtain 7.6 and 3.7 averaged gains on the challenging ASTE and TASD task respectively. 2 Generative ABSA (GAS) | In this section, we describe the investigated ABSA tasks and the proposed two paradigms, namely, annotation-style and extraction-style modeling. Aspect Opinion Pair Extraction (AOPE) aims to extract aspect terms and their corresponding opinion terms as pairs Input: Salads were fantastic, our server was also very helpful. 
Target (Annotation-style): [Salads | fantastic] were fantastic here, our [server | helpful] was also very helpful. Target (Extraction-style): (Salads, fantastic); (server, helpful) In the annotation-style paradigm, to indicate the pair relations between the aspect and opinion terms, we append the associated opinion modifier to each aspect term in the form of [aspect | opinion] for constructing the target sentence, as shown in the above example. The prediction of the coupled aspect and opinion term is thus achieved by including them in the same bracket. For the extraction-style paradigm, we treat the desired pairs as the target, which resembles direct extraction of the expected sentiment elements but in a generative manner. Unified ABSA (UABSA) is the task of extracting aspect terms and predicting their sentiment polarities at the same time aims to discover more complicated (aspect, opinion, sentiment polarity) triplets Target Aspect Sentiment Detection (TASD) is the task to detect all (aspect term, aspect category, sentiment polarity) triplets for a given sentence Given the input sentence x, we generate a target sequence y , which is either based on the annotationstyle or extraction-style paradigm as described in the last section, with a text generation model f (•). Then the desired sentiment pairs or triplets s can be decoded from the generated sequence y . Specifically, for the annotation-style modeling, we extract the contents included in the bracket "[]" from y , and separate different sentiment elements with the vertical bar "|". If such decoding fails, e.g., we cannot find any bracket in the output sentence or the number of vertical bars is not as expected, we ignore such predictions. For the extractionstyle paradigm, we separate the generated pairs or triplets from the sequence y and ignore those invalid generations in a similar way. We adopt the pre-trained T5 model Ideally, the generated element e ∈ s after decoding is supposed to exactly belong to the vocabulary set it is meant to be. For example, the predicted aspect term should explicitly appear in the input sentence. However, this might not always hold since each element is generated from the vocabulary set containing all tokens instead of its specific vocabulary set. Thus, the predictions of a generation model may exhibit morphology shift from the ground-truths, e.g., from single to plural nouns. R14 R15 R16 CMLA+ Datasets We evaluate the proposed GAS framework on four popular benchmark datasets including Laptop14, Rest14, Rest15, and Rest16, originally provided by the SemEval shared challenges Evaluation Metrics We adopt F1 scores as the main evaluation metrics for all tasks. A prediction is correct if and only if all its predicted sentiment elements in the pair or triplet are correct. We adopt the T5 base model from huggingface Transformer library Baseline The main results for the AOPE, UABSA, ASTE, TASD task are reported in Tables All results are the average F1 scores across 5 runs with different random seeds. It is noticeable that our proposed methods, based on either annotation-style or extraction-style modeling, establish new state-of-the-art results in almost all cases. The only exception is on the Rest15 dataset for the AOPE task, our method is still on par with the previous best performance. It shows that tackling various ABSA tasks with the proposed unified generative method is an effective solution. 
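The decoding step described above (extract bracketed spans and split on the vertical bar for annotation-style targets; split the generated tuples for extraction-style targets, discarding invalid predictions) can be sketched as follows. The regular expressions and the simple comma split are illustrative simplifications and would, for example, miss aspect terms that themselves contain commas.

```python
import re

def decode_annotation_style(text: str, n_elements: int = 2):
    """Recover sentiment tuples from an annotation-style generation, e.g.
    '[Salads | fantastic] were fantastic here, our [server | helpful] ...'.
    Brackets with an unexpected number of elements are ignored."""
    tuples = []
    for span in re.findall(r"\[(.*?)\]", text):
        parts = [p.strip() for p in span.split("|")]
        if len(parts) == n_elements:
            tuples.append(tuple(parts))
    return tuples

def decode_extraction_style(text: str, n_elements: int = 2):
    """Recover tuples from an extraction-style generation, e.g.
    '(Salads, fantastic); (server, helpful)'."""
    tuples = []
    for span in re.findall(r"\((.*?)\)", text):
        parts = [p.strip() for p in span.split(",")]
        if len(parts) == n_elements:
            tuples.append(tuple(parts))
    return tuples

# e.g. decode_annotation_style("[Salads | fantastic] were fantastic here ...")
# -> [("Salads", "fantastic")]
```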
Moreover, we can see that our method performs especially well on the ASTE and TASD tasks, the proposed extraction-style method outperforms the previous best models by 7.6 and 3.7 average F1 scores (across different datasets) on them respectively. It implies that incorporating the label semantics and appropriately modeling the interactions among those sentiment elements are essential for tackling complex ABSA problems. Annotation-style & Extraction-style As shown in result tables, the annotation-style method generally performs better than the extraction-style method on the AOPE and UASA task. However, the former one becomes inferior to the latter on the more complex ASTE and TASD tasks. One possible reason is that, on the ASTE and TASD tasks, the annotation-style method introduces too much content, such as the aspect category and sentiment polarity, into the target sentence, which increases the difficulty of sequence-to-sequence learning. Why Prediction Normalization Works To better understand the effectiveness of the proposed prediction normalization strategy, we randomly sample some instances from the ASTE task that have different raw prediction and normalized prediction (i.e., corrected by our strategy). The predicted sentiment elements before and after the normalization, as well as the gold label of some example cases are shown in Table We also observe that our prediction strategy may fail if the raw predictions are quite lexically different or even semantically different from the goldstandard labels (see Case #4, #7 and #8). In these cases, the difficulty does not come from the way of performing prediction normalization but the generation of labels close to the ground truths, especially for the examples containing implicit aspects or opinions (Case #4). We tackle various ABSA tasks in a novel generative framework in this paper. By formulating the target sentences with our proposed annotation-style and extraction-style paradigms, we solve multiple sentiment pair or triplet extraction tasks with a unified generation model. Extensive experiments on multiple benchmarks across four ABSA tasks show the effectiveness of our proposed method. Our work is an initial attempt on transforming ABSA tasks, which are typically treated as classification problems, into text generation problems. Experimental results indicate that such transformation is an effective solution to tackle various ABSA tasks. Following this direction, designing more effective generation paradigms and extending such ideas to other tasks can be interesting research problems for future work. | 970 | 2,128 | 970 |
Cross-language Sentence Selection via Data Augmentation and Rationale Training | This paper proposes an approach to crosslanguage sentence selection in a low-resource setting. It uses data augmentation and negative sampling techniques on noisy parallel sentence data to directly learn a cross-lingual embedding-based query relevance model. Results show that this approach performs as well as or better than multiple state-of-theart machine translation + monolingual retrieval systems trained on the same parallel data. Moreover, when a rationale training secondary objective is applied to encourage the model to match word alignment hints from a phrase-based statistical machine translation model, consistent improvements are seen across three language pairs (English-Somali, English-Swahili and English-Tagalog) over a variety of state-of-the-art baselines. | Sentence-level query relevance prediction is important for downstream tasks such as query-focused summarization and open-domain question answering; accurately pinpointing sentences containing information that is relevant to the query is critical to generating a responsive summary/answer (e.g., While we can use machine translation (MT) to translate either the query or each sentence into a common language, and then use a monolingual Information Retrieval (IR) system to find relevant sentences, work on Probabilistic Structured Queries (PSQ) For training, we treat a sentence as relevant to a query if there exists a translation equivalent of the query in the sentence. Our definition of relevance is most similar to the lexical-based relevance used in While our approach is competitive with pipelines of MT-IR, it is still sensitive to noise in the parallel sentence data. We can mitigate the negative effects of this noise if we first train a phrase-based statistical MT (SMT) model on the same parallel sentence corpus and use the extracted word alignments as additional supervision. With these alignment hints, we demonstrate consistent and significant improvements over neural and statistical MT+IR To summarize, our contributions are as follows. We (i) propose a data augmentation and negative sampling scheme to create a synthetic training set of cross-lingual query-sentence pairs with binary relevance judgements, and (ii) demonstrate the effectiveness of a Supervised Embedding-based Cross-Lingual Relevance (SECLR) model trained on this data for low-resource sentence selection tasks on text and speech. Additionally, (iii) we propose a rationale training secondary objective to further improve SECLR performance, which we call SECLR-RT. Finally, (iv) we conduct training data ablation and hubness studies that show our method's applicability to even lower-resource settings and mitigation of hubness issues | Query-focused Sentence Selection Sentencelevel query relevance prediction is important for various downstream NLP tasks such as queryfocused summarization Sentence Selection A common approach to cross-language sentence selection is to use MT to first translate either the query or the sentence to the same language and then perform standard monolingual IR As an alternative to generating full translations, PSQ Word Embeddings Crosslingual embedding methods perform cross-lingual relevance prediction by representing query and passage terms of different languages in a shared semantic space Our approach differs from previous cross-lingual word embedding methods in two aspects. 
First, the focus of previous work has mostly been on learning a distributional word representation where translation across languages is primarily shaped by syntactic or shallow semantic similarity; it has not been tuned specifically for cross-language sentence selection tasks, which is the focus of our work. Second, in contrast to previous supervised approaches that train embeddings directly on a parallel corpus or bilingual dictionary, our approach trains embeddings on an artificial labeled dataset augmented from a parallel corpus and directly represents relevance across languages. Our data augmentation scheme to build a relevance model is inspired by Trained Rationale Previous research has shown that models trained on classification tasks sometimes do not use the correct rationale when making predictions, where a rationale is a mechanism of the classification model that is expected to correspond to human intuitions about salient features for the decision function We first describe our synthetic training set generation process, which converts a parallel sentence corpus for MT into cross-lingual query-sentence pairs with binary relevance judgements for training our SECLR model. Following that, we detail our SECLR model and finish with our method for rationale training with word alignments from SMT. Relevant query/sentence generation. Assume we have a parallel corpus of bilingual sentence pairs equivalent in meaning. Let (E, S) be one such sentence pair, where E is in the query language (in our case, English) and S is in the retrieval collection language (in our case, low-resource languages). For every unigram q in E that is not a stopword, we construct a positive relevant sample by viewing q as a query and S as a relevant sentence. Because sentences E and S are (approximately) equivalent in meaning, we know that there likely exists a translation equivalent of q in the sentence S and so we label the (q, S) pair as relevant (i.e. r = 1). For example, one English-Somali sentence pair is E="true president gaas attend meeting copenhagen", S="ma runbaa madaxweyne gaas baaqday shirka copenhegan" (stopwords removed). By extracting unigrams from E as queries, we generate the following positive examples: (q="true", S, r = 1), (q="president", S, r = 1), (q="gaas", S, r = 1), ..., (q="copenhagen", S, r = 1). We generate the positive half of the training set by repeating the above process for every sentence pair in the parallel corpus. We limit model training to unigram queries since higher order ngrams appear fewer times and treating them independently reduces the risk of over-fitting. However, our model processes multi-word queries during evaluation, as described in Section 3.2. Irrelevant query/sentence generation. Since learning with only positive examples is a challenging task, we opt to create negative examples, i.e. tuples (q, S, r = 0), via negative sampling. For each positive sample (q, S, r = 1), we randomly select another sentence pair (E , S ) from the parallel corpus. We then check whether S is relevant to q or not. Note that both the query q and sentence E are in the same language, so checking whether q or a synonym can be found in E is a monolingual task. If we can verify that there is no direct match or synonym equivalent of q in E then by transitivity it is unlikely there exists a translation equivalent in S , making the pair (q, S ) a negative example. To account for synonymy when we check for matches, we represent q and the words in E with pretrained word embeddings. 
Let w q , w q ∈ R d be the embeddings associated with q and the words q ∈ E . We judge the pair (q, S ) to be irrelevant (i.e. r = 0) if: where λ 1 is a parameter. We manually tuned the relevance threshold λ 1 on a small development set of query-sentence pairs randomly generated by the algorithm, and set λ 1 = 0.4 to achieve highest label accuracy on the development set. If (q, S ) is not relevant we add (q, S , r = 0) to our synthetic training set, otherwise we re-sample (E , S ) until a negative sample is found. We generate one negative sample for each positive sample to create a balanced dataset. For example, if we want to generate a negative example for the positive example (q="meeting", S="ma runbaa madaxweyne gaas baaqday shirka copenhegan", r = 1), we randomly select another sentence pair (E ="many candidates competing elections one hopes winner", S ="musharraxiin tiro badan sidoo u tartamaysa doorashada wuxuuna mid kasta rajo qabaa guusha inay dhinaciisa ahaato") from the parallel corpus. To check whether q="meeting" is relevant to S , by transitivity it suffices to check whether q="meeting" or a synonym is present in E , a simpler monolingual task. If q is irrelevant to S , we add (q, S , r = 0) as a negative example. We propose SECLR, a model that directly makes relevance classification judgments for queries and sentences of different languages without MT as an intermediate step by learning a cross-lingual embedding space between the two languages. Not only should translation of equivalent words in either language map to similar regions in the embedding space, but dot products between query and sentence words should be correlated with the probability of relevance. We assume the training set generation process (Section 3.1) provides us with a corpus of n query-sentence pairs along with their corresponding relevance judgements, i.e. D = {(q i , S i , r i )}| n i=1 . We construct a bilingual vocabulary V = V Q ∪ V S and associate with it a matrix W ∈ R d×|V| where w x = W •,x is the word embedding associated with word x ∈ V. When the query is a unigram q (which is true by design in our training data D), we model the probability of relevance to a sentence S as: In our evaluation setting, the query is very often a phrase Q = [q 1 , . . . , q |Q| ]. In this case, we require all query words to appear in a sentence in order for a sentence to be considered as relevant. Thus, we modify our relevance model to be: Our only model parameter is the embedding matrix W which is initialized with pretrained monolingual word embeddings and learned via minimization of the cross entropy of the relevance classification task: L rel = -log p(r|q, S; W ) We can improve SECLR by incorporating additional alignment information as a secondary training objective, yielding SECLR-RT. Our intuition is that after training, the word ŝ = arg max s∈S w s w q should correspond to a translation of q. However, it is possible that ŝ simply co-occurs frequently with the true translation in our parallel data but its association is coincidental or irrelevant outside the training contexts. We use alignment information to correct for this. We run two SMT word alignment models, GIZA++ (Och and Ney, 2003) and Berkeley Aligner , such that A maps each word in the query language vocabulary to a list of document language words with different probabilities, i.e. A q,s is the probability of translating q to s and s∈V S A q,s = 1. For each relevant training sample, i.e. 
(q, S, r = 1), we create a rationale distribution ρ ∈ [0, 1] |S| which is essentially a re-normalization of possible query translations found in S and represents our intuitions about which words s ∈ S that q should be most similar to in embedding space, i.e. . for s ∈ S. We similarly create a distribution under our model, α ∈ [0, 1] |S| , where α s = exp (w q w s ) s ∈S exp (w q w s ) for s ∈ S. To encourage α to match ρ, we impose a Kullback-Leibler (KL) divergence penalty, denoted as: to our overall loss function. The total loss for a single positive sample then will be a weighted sum of the relevance classification objective and the KL divergence penalty, i.e. where λ 2 is a relative weight between the classification loss and rationale similarity loss. Note that we do not consider rationale loss for the following three types of samples: negative samples, positive samples where the query word is not found in the translation matrix, and positive samples where none of the translations of the query in the matrix are present in the source sentence. The parallel sentence data for training our proposed method and all baselines includes the parallel data provided in the BUILD collections of both the MATERIAL We evaluate our sentence-selection model on English (EN) queries over three collections in SO, SW, and TL recently made available as part of the IARPA MATERIAL program. In contrast to our training data which is synthetic, our evaluation datasets are human-annotated for relevance between real-world multi-domain queries and documents. For each language there are three partitions (Analysis, Dev, and Eval), with the former two being smaller collections intended for system development, and the latter being a larger evaluation corpus. In our main experiments we do not use Analysis or Dev for development and so we report results for all three (the ground truth relevance judgements for the TL Eval collection have not been released yet so we do not report Eval for TL). See Table While our model and baselines work at the sentence-level, the MATERIAL relevance judgements are only at the document level. Following previous work on evaluation of passage retrieval, we aggregate our sentence-level relevance scores to obtain document-level scores We initialize English word embeddings with word2vec Cross-Lingual Word Embeddings. We compare our model with three other cross-lingual embedding methods, Bivec since these models are optimized for this comparison function MT+IR. We also compare to a pipeline of NMT To implement the PSQ model of Multilingual XLM-RoBERTa. We compare our model to the cross-lingual model XLM-RoBERTa We report Mean Average Precision (MAP) of our main experiment in Table While MT+IR is a competitive baseline, it is consistently outperformed by PSQ across all test conditions, suggesting that in low-resource settings it is not necessary to perform full translation to achieve good sentence selection performance. SMT, PSQ, and SECLR-RT all make use of the same word-alignment information but only SMT generates translations, adding additional evidence to this claim. PSQ and SECLR are close in performance on Analysis and Dev sets with SECLR eking out a slight advantage on seven of 12 Anaylsis/Dev set conditions. On the larger Eval partitions, it becomes clearer that PSQ is superior to SECLR, suggesting that the relevance classification objective is not as informative as word alignment information. 
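A minimal PyTorch sketch of the SECLR-RT training objective described in this section is given below. The exact relevance equation is elided in the text, so the sigmoid over the maximum query-word/sentence-word dot product is an assumption consistent with the description of ŝ = argmax_s w_s⊤w_q; the rationale term is the KL penalty between the renormalised alignment distribution ρ and the model distribution α. Function names and the toy tensors are ours, and in the real model w_q and the rows of W_s come from the single trainable embedding matrix W.

```python
import torch
import torch.nn.functional as F

def relevance_loss(w_q, W_s, r):
    """Binary cross-entropy on p(r=1|q,S) = sigmoid(max_s w_q . w_s).
    w_q: (d,) query-word embedding; W_s: (|S|, d) sentence-word embeddings;
    r: 0./1. relevance label. The max-dot-product form is our reading of the
    (elided) relevance equation."""
    score = (W_s @ w_q).max()
    return F.binary_cross_entropy_with_logits(score, torch.tensor(r))

def rationale_loss(w_q, W_s, align_probs):
    """KL(rho || alpha): pull the model distribution alpha over sentence words
    toward the renormalised SMT alignment distribution rho."""
    alpha_log = F.log_softmax(W_s @ w_q, dim=0)   # log alpha_s
    rho = align_probs / align_probs.sum()         # renormalised alignment hints
    mask = rho > 0                                # convention: 0 * log 0 = 0
    return (rho[mask] * (rho[mask].log() - alpha_log[mask])).sum()

def seclr_rt_loss(w_q, W_s, r, align_probs=None, lambda2=1.0):
    loss = relevance_loss(w_q, W_s, r)
    # Rationale loss only for positive samples with usable alignment hints.
    if r == 1.0 and align_probs is not None and align_probs.sum() > 0:
        loss = loss + lambda2 * rationale_loss(w_q, W_s, align_probs)
    return loss

# Toy example: a 4-word sentence, 8-dim embeddings, one relevant (q, S) pair.
d = 8
w_q = torch.randn(d, requires_grad=True)
W_s = torch.randn(4, d, requires_grad=True)
align = torch.tensor([0.0, 0.7, 0.1, 0.0])   # A_{q,s} for the words of S
print(seclr_rt_loss(w_q, W_s, r=1.0, align_probs=align))
```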
The relevance classification and trained rationale objectives capture slightly different information it seems; SECLR-RT, which uses both, out-performs PSQ across all 16 test conditions. In Section 5, we have shown that SECLR-RT consistently out-performs all baselines across all languages. Since this work targets cross-language sentence selection in a low-resource setting, we perform a training data ablation study to understand how training data size affects effectiveness. We performed the ablation study for our two models SECLR and SECLR-RT, and the two strongest baseline methods PSQ and SID-SGNS. To simulate further the scenario of data scarcity, we sub-sampled our parallel corpus uniformly at random for 5%, 10%, 25%, 50% of the sentence pairs of the original corpus. Each sentence pair in the parallel corpus is sampled with equal probability regardless of sentence length. For consistency, for each sample size, the same sampled parallel corpus is used across all models. The word alignment probability matrix used by PSQ and SECLR-RT is generated from the same sampled corpus. Since we tune the vocabulary size on the Dev set, for fair comparison we only report MAP scores on the Analysis and Eval sets. We plot MAP scores of the four models as a function of percentage of data sampled in Figure In the low-resource setting when the sample size is 5% or 10%, SECLR consistently underperforms other models, confirming our observation that SECLR is sensitive to noise and vulnerable to learning co-occurrences of word pairs that are in fact irrelevant. When the sample size is 5% or 10%, PSQ consistently achieves better performance than SID-SGNS and SECLR (although still under-performing SECLR-RT), indicating that alignment-based methods are more robust to noise and especially useful when data is extremely scarce. The fact that SECLR-RT consistently out-performs SECLR by a wide margin for small sample sizes indicates the necessity and effectiveness of incorporating alignment-based information into SECLR to improve the robustness of the model and learn more precise alignments. In this section, we show that by incorporating alignment information through rationale training, SECLR-RT significantly alleviates the hubness problem present in the trained cross-lingual embedding space produced by SECLR. Previous research on cross-lingual word embeddings has observed that a high-dimensional representation space with a similarity-based metric often induces a hub structure The hub structure is problematic in IR since the hub vectors are often wrongly predicted as relevant and similar in meaning to queries that are in fact irrelevant Following We report S N 10 scores for SECLR and SECLR-RT respectively in Table In this work, we presented a supervised crosslingual embedding-based query relevance model, SECLR, for cross-language sentence selection and also applied a rationale training objective to further increase model performance. The resulting SECLR-RT model outperforms a range of baseline methods on a cross-language sentence selection task. Study of data ablation and hubness further indicate our model's efficacy in handling lowresource settings and reducing hub structures. In future work, we hope to apply our sentence-level query relevance approach to downstream NLP tasks such as query-focused summarization and opendomain question answering. When we train SECLR and SECLR-RT via data augmentation, we randomly split the parallel corpus into train set (96%), validation set (3%) and test set (1%). 
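As a concrete illustration of the Section 3.1 augmentation applied to each split, the sketch below generates positive (q, S, 1) pairs from the English side of a parallel sentence and negative (q, S', 0) pairs by re-sampling until the λ1 = 0.4 embedding-similarity check passes. The stopword list, the random placeholder embeddings, and the helper names are ours; in practice the embeddings would be the pretrained word2vec vectors mentioned above, and the exact similarity test is our reading of the (elided) threshold equation.

```python
import random
import numpy as np

STOPWORDS = {"the", "a", "of", "to", "was", "were", "is"}
LAMBDA_1 = 0.4   # relevance threshold tuned on a small development set (Section 3.1)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

def positive_samples(parallel_pairs):
    """Every non-stopword unigram of the English side E becomes a query that is
    relevant (r=1) to the foreign sentence S."""
    for E, S in parallel_pairs:
        for q in E.lower().split():
            if q not in STOPWORDS:
                yield (q, S, 1)

def negative_sample(q, parallel_pairs, emb, max_tries=50):
    """Re-sample (E', S') until no word of E' looks like a (near-)synonym of q;
    by transitivity (q, S', 0) is then very likely a true negative."""
    for _ in range(max_tries):
        E2, S2 = random.choice(parallel_pairs)
        sims = [cosine(emb[q], emb[w]) for w in E2.lower().split() if w in emb]
        if q in emb and max(sims, default=0.0) < LAMBDA_1:
            return (q, S2, 0)
    return None   # give up if the corpus is too small or too homogeneous

# Toy run with random stand-ins for the pretrained English embeddings.
rng = np.random.default_rng(0)
pairs = [("true president gaas attend meeting copenhagen",
          "ma runbaa madaxweyne gaas baaqday shirka copenhegan"),
         ("many candidates competing elections one hopes winner",
          "musharraxiin tiro badan sidoo u tartamaysa doorashada")]
emb = {w: rng.standard_normal(16) for E, _ in pairs for w in E.split()}
for q, S, r in list(positive_samples(pairs))[:3]:
    print((q, r), "->", negative_sample(q, pairs, emb))
```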
We then use the dataset augmentation technique introduced in Section 3.1 to generate positive and negative samples for each set. Augmenting the dataset upon the split corpus allows us to achieve more independence between train/validation/test set compared to splitting the dataset augmented on the entire parallel corpus. Note that we only use the validation set for early stopping but we do not tune hyperparameters with the validation set. We preprocess the parallel corpus, the query collection and the sentence collection with the Moses toolkit In this section we demonstrate some examples from the MATERIAL dataset used for evaluation. Example queries include: "evidence", "human rights", "chlorine", "academy", "ratify", "constitution", "carnage" and "Kenya". On average only 0.13% of the documents in the Eval collection are relevant to each query, which makes the task hard. Here are two examples from Somali Analysis text. Because the documents are long, here we only include the relevant segment of a long relevant document. In the first example, the English query is "contravention" and the relevant segment of a long relevant document (translated from Somali to English by human) is "the security forces captured military equipment coming into the country illegally." This segment is relevant to the query because of the word "illegally". Here is another example where the the English query is "integrity". The relevant segment of a long relevant document (translated from Somali to English by human) is "Hargeisa (Dawan) -Ahmed Mohamed Diriye (Nana) the member of parliament who is part of the Somaliland house of representatives has accused the opposition parties (Waddani and UCID) of engaging in acts of national destruction, that undermines the existence and sovereignty of the country of Somaliland." This segment is relevant to the query because of the word "sovereignty". Since there are multiple ways to translate a word and since MT performance is relatively poor in lowresource settings, the task is far more challenging than a simple lexical match between queries and translated documents. In this section we include extra implementation and experiment details that are not included in the main paper. Information already included in the main paper are not repeated here for conciseness. We train our SECLR and SECLR-RT models on Tesla V100 GPUs. Each model is trained on a single GPU. We report training time of SECLR and SECLR-RT on Somali, Swahili and Tagalog in As is discussed in Section 3.2, the only trainable model parameters of SECLR and SECLR-RT are the word embedding matrices. Thus, SECLR and SECLR-RT have the same number of model parameters. We report the number of trainable parameters of both models on Somali, Swahili and Tagalog in Table Our SMT system uses the following feature functions: phrase translation model, distance-based reordering model, lexicalized reordering model, 5-gram language model on the target side, word penalty, distortion, unknown word penalty and phrase penalty. We use backtranslation in earlier versions of MT systems. Following previous work Later, we discover that decoder pretraining with monolingual data achieves better performance compared to backtranslation. The decoder pretraining scheme we use now is most similar to the paper by There is no WMT benchmark for Somali, Swahili or Tagalog, but we use state-of-the-art techniques in our MT systems. We have also experimented with the bilingual data selection method (Junczys-Dowmunt, 2018). 
However, this technique does not work well, mostly because lowresource MT systems are not good enough to do scoring. In this section we include extra experimental results that are not included in the main text due to limited space. When we are designing the SECLR model, we experiment with adding LSTMs and using the dot product between LSTM hidden states to compute pairwise similarity between the query and the sentence. We report MAP scores of SECLR with LSTM in Table | 777 | 1,920 | 777 |
Enhancing Extreme Multi-Label Text Classification: Addressing Challenges in Model, Data, and Evaluation | Extreme multi-label text classification is a prevalent task in industry, but it frequently encounters challenges in terms of machine learning perspectives, including model limitations, data scarcity, and time-consuming evaluation. This paper aims to mitigate these issues by introducing novel approaches. Firstly, we propose a label ranking model as an alternative to the conventional SciBERT-based classification model, enabling efficient handling of largescale labels and accommodating new labels. Secondly, we present an active learning-based pipeline that addresses the data scarcity of new labels during the update of a classification system. Finally, we introduce ChatGPT to assist with model evaluation. Our experiments demonstrate the effectiveness of these techniques in enhancing the extreme multi-label text classification task. | Extreme Multi-label Text Classification (XMTC) refers to the task of assigning to each document its most relevant labels from a taxonomy, where the number of labels could reach hundreds of thousands or millions However, the existing approaches often face inherent challenges pertaining to the model, data, and evaluation aspects. First, classification models typically serve as the default choice for this task In this work, we aim to replace our existing classification pipeline with a new solution that addresses the aforementioned issues. First, we introduce a label ranking model to replace the SciBERTbased classification model used in production. This new model comprises a Bi-Encoder model and a Cross-Encoder model We assess our pipeline's performance by considering model effectiveness, training costs, and manual annotation costs. The predicted labels of our pipeline exhibit greater correctness and specificity compared to the production baseline. For a newly introduced label, it requires on average 100 human-annotated samples for the updated model to achieve a Recall@10 of 0.8. Additionally, with the help ChatGPT, SMEs' annotation effort is reduced from 15 mins to 5 mins for annotating a single document with 10 labels. As a result, our proposed pipeline enables multiple releases within a single year, significantly enhancing efficiency and productivity. | In the field of multi-label text classification, numerous studies have contributed to the development of effective models and techniques We introduce a label ranking model to replace the SciBERT-based model in our cooperative production. It comprises a Bi-Encoder model and a Cross-Encoder model. The Bi-Encoder model offers benefits such as high recall and low computational cost, while the Cross-Encoder model enhances precision by re-ranking the top documents. See Figure A Bi-Encoder model An important detail during the training of the Bi-Encoder is to keep the same labels out of the same batch since the MultipleNegativesRank-ingLoss uses the other samples in the batch as negative examples. Therefore, if a label appears more than once it will create confusion due to samples from the same label acting as negative samples for each other. Industry taxonomies are dynamic, with new classes added and existing ones removed over time. Consequently, reclassifying existing and future documents using the updated taxonomy becomes necessary. The current standard practice involves fully retraining classification models from scratch after a taxonomy change, which is computationally inefficient and costly. 
In this section, we illustrate a significant advantage of our label ranking model, as it allows for the introduction of new classes into the taxonomy without requiring full model retraining. In the context of introducing a new label into a taxonomy, Active Learning (AL) provides an efficient approach to obtain labeled samples by iteratively learning from existing labeled samples and selecting unlabeled samples for annotation based on an acquisition strategy. We perform the cold-start pool-based AL Here we ask the oracle a binary question, i.e. given an unlabeled sample u, does it belong to class c. A model M iteratively learns a set of new labels C new via the AL cycle as described in Algorithm 1. U ← U \ U s 6: end for In our corpus, a document can have multiple labels and therefore every document in the corpus is a potential candidate for newly introduced labels. The challenge here is that the corpus U corpus has more than 14M documents and this requires practically infeasible computational resources to do model inference at each iteration of AL (line 3 in Algorithm 1). To address this challenge, we propose an alternative approach that utilizes a separate Bi-Encoder model to retrieve a relatively small number of potentially relevant documents, which serve as the unlabeled samples U , such that |U | ≪ |U corpus |. To train the separate Bi-Encoder model, we select a random sample of 80K documents from the domain of the new labels, and then use the unsupervised domain adaptation method GPL The acquisition strategy is the key area of research within AL, however, these strategies are mostly based on classification-based models This strategy is greedy because we are forcing positive examples to be chosen for a given label c. In this scenario it is a valid heuristic, because the Bi-Encoder component in M learns using the MultipleNegativesRanking loss and this loss uses positive pairs as its input. So, it is necessary for our training process to find positive pairs between label and documents Algorithm 2 Greedy Acquisition Strategy S Our first issue in training the model M is catastrophic forgetting, a phenomenon that occurs when learning new labels For the Cross-Encoder component of M, we train it continuously together with the Bi-Encoder at each iteration. We first get the top k ranked documents from the updated Bi-Encoder, and then use the true label given by the oracle as a positive example and randomly sample 3 labels as the negatives, as mentioned in Section 3.1. The absence of a test set presents a common challenge for offline evaluation. However, creating a test set can be a time-consuming task. For instance, providing SMEs with a single document and 10 labels can take approximately 15 minutes for annotation. The major reason is that SMEs are typically proficient in only one or two domains, and there is no expert who possesses knowledge across all domains. Even domain experts may lack comprehensive knowledge of highly specialized topics, making it difficult to precisely determine the relevance of a label to a given document. While ChatGPT has shown great potential to help data annotation in NLP To address these challenges, we leverage Chat-GPT as an assisting evaluation tool. We begin by generating prompts for the documents that require annotation and employ ChatGPT to provide label relevance scores (0=irrelevant, 1=somewhat relevant, or 2=highly relevant) along with explanations for these scores. 
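The sketch below summarises one reading of the AL cycle of Algorithm 1 combined with the greedy acquisition strategy: keep asking the oracle the binary question about the top-ranked unlabelled documents until a batch of positives is found, then continue training on them. The rank, oracle, and update callables are stand-ins for the Bi-/Cross-Encoder ranking, the SME judgement, and the replay-augmented training step; the toy pool and the dummy implementations are ours.

```python
import random

def active_learning_loop(new_label, pool, rank, oracle, update, budget=100, batch=1):
    """Cold-start pool-based AL for one newly introduced taxonomy label.

    rank(label, docs)   -> docs sorted by predicted relevance
    oracle(label, doc)  -> True/False, the binary SME judgement
    update(label, docs) -> continues training the model on newly found positives
    """
    labeled, positives = set(), []
    for _ in range(budget):
        found = []
        for doc in rank(new_label, [d for d in pool if d not in labeled]):
            labeled.add(doc)
            if oracle(new_label, doc):          # ask the binary question
                found.append(doc)
                if len(found) == batch:
                    break
        if not found:
            break                                # pool exhausted, stop early
        positives.extend(found)
        update(new_label, found)                 # retrain on the new positives (+ replay)
    return positives

# Toy stand-ins so the sketch runs end to end.
pool = [f"doc_{i}" for i in range(20)]
rank = lambda label, docs: sorted(docs, key=lambda d: random.random())
oracle = lambda label, doc: doc.endswith(("3", "7"))
update = lambda label, docs: None
print(active_learning_loop("new taxonomy label", pool, rank, oracle, update, budget=5))
```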
Table To facilitate efficient model updates and data annotation, we have developed a web application (Figure At the beginning of the AL process, the BiCross-Encoder model provides a list of ranked documents by relevancy. These documents are shown one by one to all users without repetition. The users will be able to decide if the label matches the content of the abstract. Once a batch of positive results (label matches abstract) is obtained, it is sent to the model for training, and a new list of ranked abstracts is provided. The application's asynchronous nature ensures that users are unaffected by any time delays caused by these model processes. Additionally, user responses and time spent on annotations are stored and linked to project and abstract data. Prompt Which of the following 0. Fuzzy neural networks ... are relevant topics for this abstract. For each just provide a relevance score between 0 and 2, and an explanation. 0 means not relevant and 2 means highly relevant. -> TITLE: ... ABSTRACT: ... the determination of the rail voltage for a 1500 V DC-fed rail system by means of the adaptive neuro-fuzzy inference system ... Response 0. Fuzzy neural networks: 2 -The study uses an adaptive neuro-fuzzy inference system (ANFIS), which combines fuzzy logic and neural networks ... The application also allows users to track model performance during their annotation, once they are satisfied with the performance they can terminate model training. 5 Experimental Setup Labels. The labels assigned to documents are derived from Elsevier's Compendex taxonomy, which encompasses approximately 11,486 labels from the generic engineering domain. This taxonomy exhibits a poly-hierarchy structure, wherein certain leaf nodes can have multiple parent nodes. The taxonomy undergoes regular updates, typically on an annual basis. These updates involve the addition of new labels and the potential removal of existing ones to ensure their accuracy over time. Corpus. The corpus we work with contains about 14M documents of interdisciplinary engineering content. Each document has a title, an abstract, keywords, and some meta information; it is associated with several labels generated by a rule-based fuzzy string matching system. We use the concatenation of title, abstract, and keywords to encode the documents. Document pool (DP) dataset. It consists of relevant and irrelevant documents for 7 taxonomy labels. For each label, the dataset contains between 250 and 450 documents (mean=363), which were manually annotated as relevant or irrelevant (mean=150 relevant documents). The irrelevant documents are mainly hard negatives. Active learning (AL) dataset. Out of the 11,486 labels in the taxonomy, we randomly chose 30 labels to represent the newly introduced labels. Next, we utilized the GPL Bi-Encoder to select 1,000 samples for each concept from the corpus, resulting in an unlabeled pool of data comprising 30,000 samples. Additionally, we randomly selected a total of 5,000 documents from the dataset to form the test set for the 30 selected labels. Production model. The production model is a SciBert-based multi-label classification model, with a classification layer on top of the [CLS] output of the pre-trained SciBert model (allenai/scibert_scivocab_uncased). The classification model was finetuned using the MultiLabel-SoftMarginLoss on a 2M documents subset of our 14M corpus, with taxonomy labels generated by a rule-based system. 
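For reference, a minimal PyTorch sketch of the production baseline described above: a SciBERT encoder with a linear layer over the [CLS] output, one logit per taxonomy label, trained with MultiLabelSoftMarginLoss. This is our reconstruction from the description rather than the production code, and the label indices in the toy batch are arbitrary.

```python
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

class SciBertMultiLabel(nn.Module):
    """SciBERT encoder + one linear layer over the [CLS] output,
    one logit per taxonomy label (~11,486 for Compendex)."""
    def __init__(self, num_labels, checkpoint="allenai/scibert_scivocab_uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(checkpoint)
        self.classifier = nn.Linear(self.encoder.config.hidden_size, num_labels)

    def forward(self, **enc):
        cls = self.encoder(**enc).last_hidden_state[:, 0]   # [CLS] representation
        return self.classifier(cls)

num_labels = 11486
tok = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
model = SciBertMultiLabel(num_labels)
loss_fn = nn.MultiLabelSoftMarginLoss()

batch = tok(["title + abstract + keywords of a document"], return_tensors="pt",
            truncation=True, padding=True)
targets = torch.zeros(1, num_labels)
targets[0, [12, 345]] = 1.0          # weak, rule-based labels for this document
loss = loss_fn(model(**batch), targets)
loss.backward()
```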
In this experiment, we aim to answer whether our label ranking model outperforms the classification model for extremely large label scenarios. The BiCross-Encoder model was trained on the 14M documents with weak labels generated by a rulebased system. The evaluation was done automatically using ChatGPT. We first select 22 documents from each of the 4 domains, i.e. communication, natural science, material science, and computer science; then we do inference using both models to produce a rank list from the 11,486 labels. We keep the top 10 labels and ask ChatGPT to answer whether the label is relevant to the corresponding document or not. In Table A natural question about ChatGPT that readers might come up with is whether it is reliable for automatic evaluation. We manually ask SME to examine the answers (0, 1, or 2) from ChatGPT and give their own answer if the ChatGPT answer is not correct. The percentage of agreement is 60% on the original 3-point scale and 82% on a 2-point scale (mapping 1 and 2 as 1). The relatively low agreement from the 3-point scale is because of confusion between 1 (somewhat relevant) and 2 (highly relevant). Given that a 2-point scale is enough for most relevant tasks, we conclude that using ChatGPT for evaluation is acceptable if we are faced with limited time and monetary budget for annotation. In this experiment, we use the DP dataset to evaluate the ranking performance of the GPL-finetuned Bi-Encoder, which we use for selecting the initial document pool of potentially relevant documents. Figure To sum up, the finetuned model is well capable of selecting a set of relevant documents for a given label, consequently benefiting the efficiency of the AL loop. In this experiment, we show the results of AL the 30 newly introduced labels. Here we used a Bi-Encoder trained on the "old" labels and the distilroberta-base Cross-Encoder off the-shelf. The results are shown in Figure First, by training the Cross-Encoder to re-rank the Bi-Encoder label rankings, we observed a performance boost of approximately 15 points, resulting in a Recall@10 of 0.85. Second, the performance improvement was achieved with just 100 iterations. It is noteworthy that each iteration involved, on average, only 1 or 2 newly labeled samples, summing up to 100 samples per new label. This indicates that combining the selection of the initial pool via GPL and the greedy acquisition strategy together is a successful heuristic for newly introduced labels, especially in low-budget scenarios. Table Table The result indicates that the model's performance remained consistent with the old labels even after applying AL. Surprisingly, the model's performance even exhibited a significant improvement. This finding confirms the efficacy of incorporating data replay as an effective countermeasure against catastrophic forgetting. Additionally, the integration of data replay in the Bi-Encoder model allowed it to learn the relation between the new and old labels in its semantic space. As a result, the embeddings between the old labels were better defined, leading to the observed enhanced performance following AL on the new classes. In this work, we propose an approach to enhance our pipeline for the extreme multi-label text classification task. We replace the traditional SciBERTbased classification model with a label ranking model based on a Bi-Encoder and a Cross-Encoder, enabling efficient handling of large-scale labels. 
Moreover, we present an active learning-based pipeline that addresses the data scarcity of new labels during the update of a classification model. Finally, we demonstrate the effectiveness of using ChatGPT for model evaluation when faced with limited time and monetary budget for annotation. One of the limiting factors during the AL cycle is that our acquisition strategy is a greedy method. The acquisition strategies in existing works usually depend on the classification head and embedding space of a given model, which may not be directly compatible with our ranking-based model. A direction for future research would be looking at acquisition strategy for ranking-based models. Another limitation is that in the AL cycle, only the positively annotated samples by the oracle are used for training the model. This is not entirely efficient because the negatively annotated samples are not used, while they also cost resources. A possible solution is to have a different loss that incorporates these negatively annotated samples during training. Another solution is to change the task of the oracle to give all the categories a sample belongs to. | 839 | 1,372 | 839 |
Beware of Model Collapse! Fast and Stable Test-time Adaptation for Robust Question Answering | Although pre-trained language models (PLM) have achieved great success in question answering (QA), their robustness is still insufficient to support their practical applications, especially in the face of distribution shifts. Recently, testtime adaptation (TTA) has shown great potential for solving this problem, which adapts the model to fit the test samples at test time. However, TTA sometimes causes model collapse, making almost all the model outputs incorrect, which has raised concerns about its stability and reliability. In this paper, we delve into why TTA causes model collapse and find that the imbalanced label distribution inherent in QA is the reason for it. To address this problem, we propose Anti-Collapse Fast test-time adaptation (Anti-CF), which utilizes the source model's output to regularize the update of the adapted model during test time. We further design an efficient side block to reduce its inference time. Extensive experiments on various distribution shift scenarios and pre-trained language models (e.g., XLM-RoBERTa, BLOOM) demonstrate that our method can achieve comparable or better results than previous TTA methods at a speed close to vanilla forward propagation, which is 1.8× to 4.4× speedup compared to previous TTA methods. Our code is available at | Pre-trained language models (PLMs) have achieved great success on many NLP tasks To address this problem, researchers have proposed many approaches such as adversarial training To solve this problem, we take QA task as an example and investigate why TTA causes the model collapse. Our experiments indicate that the main reason for the model collapse is the imbalanced label distribution of the test data. In contrast to the direct inference, TTA exacerbates this imbalanced distribution, making all outputs of the model to be a specific class. Therefore, we propose Anti-Collapse Fast test-time adaptation (Anti-CF), which utilizes the output of the source model as a soft label to regularize the update of the adapted model during test time to ensure that the adapted model will not deviate too far from the source model, thus avoiding model collapse. However, to obtain the output of the source model and the adapted model, we need to keep the parameters of two models and conduct forward propagation twice, which will bring a lot of additional costs in practical applications. Therefore, we freeze the source model and add an efficient side block as the adapted model to reduce the cost of additional forward propagation and back propagation. Extensive experiments on various distribution shift scenarios and PLMs demonstrate that our method can achieve comparable or better results than previous TTA methods at a speed close to vanilla forward propagation, which is 1.8× to 4.4× speedup compared to previous TTA methods. Overall, our contributions in this work include: • We investigate why TTA causes model collapse in QA and find that the imbalanced label distribution inherent in QA is the reason for it. | In this section, we begin by introducing extractive question answering and the application of TTA to enhance its robustness. Subsequently, we focus on Tent In extractive QA, the input of the model is a combination of a context and a question. The goal is to determine the start and end positions of the answer within the context, where the text between them represents the answer. 
However, in practice, the context-question pairs are often too long to be directly processed by the model. To address this, we divide the context into smaller spans. For each span, the model predicts the start and end positions of the answer within that specific span. In cases where the model determines that the answer does not exist within a given span, it will output the start and end positions of a special token, such as the where p (y c |x t ) is the probability of the c-th category of x t . We use XLM-RoBERTa-base To explore why TTA causes model collapse, we study the entropy of test data. As Figure In this section, we propose Anti-collapse Fast Testtime adaptation (Anti-CF). Anti-CF consists of two strategies. (1) Entropy minimization with source constraints (Section 3.1) seeks to ensure that the adapted model does not deviate too far from the source model, thus avoiding the occurrence of model collapse. (2) Efficient side block (Section 3.2) aims to reduce the inference time by building a small network next to the backbone. To solve the problem we mentioned in Section 2.2, we want to use the output of the source model as a constraint to the adapted model during test time so that the adapted model does not deviate too far from the source model, thus avoiding model collapse. Like many previous TTA methods, we also choose entropy minimization as one of our optimization goals, which can be formulated as: where {x} n i is a batch of test samples and p a (y c |x i ) is the prediction probability of x i given by the adapted model. We use forward Kullback-Leibler (KL) divergence to constrain the update of the adapted model, which will make the output of the adapted model close to that of the source model: where p s (y c |x t ) is the probability of the c-th category given by the source model. We introduced a hyper-parameter α to balance the two losses, so the loss function of Anti-CF is: We can briefly analyze why Anti-CF can avoid model collapse. Suppose model collapse has occurred, with extremely low entropy like that in Section 2.2. At this point, the first part of the loss L e is close to 0, and the loss approximately has only the second part L c . Therefore, the main objective of the loss is to pull the adapted model closer There is an adapter added between every two layers. Gradients only propagate through adapters, and the backbone is frozen. The adapted output will be used as the final output for prediction, while the source output is only used to constrain the update of the efficient side block. to the source model, which effectively avoids the occurrence of model collapse. To minimize Eq.4, we need to obtain the predicted probability of the source and adapted model. However, this requires at least two forward propagation and one back propagation for each sample, which undoubtedly dramatically increases the cost of practical application. To break this dilemma, we propose an efficient side block, which is plugged into the backbone as the adapted model so that we only need one forward propagation to obtain the two outputs simultaneously. In addition, the gradient only back propagates through the efficient side block, reducing the cost of back propagation. As shown in Figure (5) where h k is the hidden state of the k-th Transformer layer, s is the hidden state of the i-th adapter, i ranges from 1 to the number of layers in the efficient side block. Both h 0 and s 0 are initialized as embedding outputs. 
For example, the XLM-RoBERTa-large has 24 Transformer layers, we take 12 adapter modules as the side block. When a sample is given, since the backbone and side block are parallel, only one forward propagation is needed to obtain the output of the source model and the adapted model. During back propagation, the backbone is frozen and only the parameters of the efficient side block are updated, which prevents gradient propagation in the backbone, thus significantly accelerating the backpropagation speed. Since the efficient side block is additionally plugged into the backbone in the TTA phase, it is not trained in the training phase. Thus, its parameters are randomly initialized. We believe that the efficient side block without learning task-specific information may cause performance degradation of TTA, so we train the efficient side block before performing TTA, which we call the warmup process. Since the warm-up phase only learns task-related information, the warmup data can be either the training data of the original model or other available data of the same task. To verify the effectiveness of our proposed Anti-CF, we conduct experiments in three distribution shift scenarios: adversarial attack, cross-lingual, and cross-domain. Datasets we use as the following: NoiseQA We use the following strong baselines as a comparison to verify the effectiveness of Anti-CF. Tent In our main experiments, we utilize the XLM-RoBERTa-base/large as the backbone model. In addition, we use xTune For all baselines, to speed up TTA as much as possible, we follow the setup of For Anti-CF, we set the adapter's hidden size the same as the source model's hidden size. Unlike the setting in OIL, we believe that TTA should not select a set of hyper-parameters for each test set individually because in a complex and variable realworld scenario, we cannot make a careful hyperparameter selection for each distribution shift. We run all experiments with different random seeds three times and take the averaged result as the final experimental results. We tune the model with the learning rate in {5e-5, 1e-4, 5e-4} and set the batch size as 8. We use the validation set of SQuAD to warmup the efficient side block for one epoch with the learning rate of 5e-4. All experiments are completed on NVIDIA RTX 3090 GPU. Details of all hyper-parameters are given in Appendix B. Table Anti-CF has superior inference speed among all TTA methods. Anti-CF is about 1.8 times faster than Tent, two times faster than EATA, 4.4 times faster than OIL, 3.4 times faster than SAR, and only about 20% slower than vanilla forward. This speed is faster than all existing TTA methods and has a vast advantage in real-world applications. Anti-CF can achieve comparable or better results than other TTA methods. On the NoiseQA, XQuAD, and MLQA datasets, each TTA method performs well and can achieve performance improvements based on the source model. Anti-CF can achieve comparable or better results than other TTA methods without model collapse. Among them, when using xlmr-large as the source model, the EM of Anti-CF is 5.98% higher than that of vanilla forward and 0.94% higher than the best performance among other TTA methods on NoiseQAsyn. On average, Anti-CF has a stable improvement effect on all source models. Anti-CF has great potential in real-world applications. Previously, TTA was challenging to apply in real-world applications due to its instability. 
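A minimal PyTorch sketch of the Anti-CF test-time objective: the entropy term and the forward-KL constraint toward the frozen source model follow the description in Section 3.1, while the exact way α combines the two losses is elided in the text, so the convex combination below is an assumption, as are the toy logits. In the full method the gradients of this loss only flow into the efficient side block, never into the frozen backbone.

```python
import torch
import torch.nn.functional as F

def anti_cf_loss(adapted_logits, source_logits, alpha=0.5):
    """Test-time loss of Anti-CF on one batch of logits.

    L_e: mean entropy of the adapted model's predictions (the usual TTA signal).
    L_c: forward KL(source || adapted), keeping the adapted model close to the
         frozen source model so that entropy minimisation cannot collapse it.
    """
    log_p_a = F.log_softmax(adapted_logits, dim=-1)
    p_s = F.softmax(source_logits.detach(), dim=-1)            # source is frozen
    entropy = -(log_p_a.exp() * log_p_a).sum(dim=-1).mean()    # L_e
    constraint = F.kl_div(log_p_a, p_s, reduction="batchmean") # L_c = KL(p_s || p_a)
    return alpha * entropy + (1 - alpha) * constraint

# Toy test-time step: only the side-block parameters would receive gradients.
adapted_logits = torch.randn(8, 384, requires_grad=True)   # e.g. start-position logits
source_logits = torch.randn(8, 384)
loss = anti_cf_loss(adapted_logits, source_logits, alpha=0.5)
loss.backward()
```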
Although it achieves performance improvements, it will also sometimes causes the model to collapse, resulting in almost all output being false, which is unacceptable in real-world applications. Anti-CF can avoid it. In addition, many existing TTA methods are becoming increasingly complex, incorporating technologies such as contrastive learning 5 Further Analysis The learning rate is a very important hyperparameter of TTA. α is an important hyper-parameter of Anti-CF, significantly influencing the results. To thoroughly investigate its impact, we conduct experiments on the NaturalQA dataset. Figure We explore the impact of the amount of warmup data on Anti-CF. We use xlmr-large as the source model on the NoiseQA-syn dataset and conducted experiments with different amounts of warmup data. We randomly sample the warmup data from the validation set of SQuAD. As shown in Figure 5.4 Memory Usage for TTA Methods. In practical applications, TTA requires additional memory, which poses challenges when deploying on lightweight devices with limited memory resources. We discover that the efficient side block of Anti-CF can potentially solve this problem by reducing the memory required for back propagation. To demonstrate this, we record the memory required by each TTA method in Figure With the recent amazing progress in generative large language models (LLMs), generative QA is becoming increasingly valuable for research and application. In light of this, we also investigate the potential of TTA, especially Anti-CF in it. We train Test-Time Adaptation Test-Time Adaptation (TTA) is a promising paradigm to deal with the distribution shift. TTA uses self-supervised signals to update the model at the inference stage. It has achieved surprising performance in various tasks Robustness NLP Training a sufficiently robust model is a prerequisite for the practical application of a trustworthy NLP system. In this paper, we attempt to improve the robustness of QA models by testing time adaptation (TTA) but find that TTA causes the models collapse. We thoroughly investigate why previous TTA methods cause the model collapse and find that the imbalanced label distribution is the main reason. We address this problem by adding constraints between the source and adapted model during the TTA process. We also design an efficient side block to speed up the inference time. Sufficient experimental results show that our proposed method is effective and efficient, making TTA a big step closer to being applied in real-world scenarios. Although our proposed Anti-CF has made significant progress in terms of stability and inference efficiency compared with existing TTA methods, there are still some limitations: • Anti-CF constrains the adapted model's prediction with the source model's prediction, which prevents TTA from model collapse and effectively improves the lower bound of the model performance. However, the source model tends to perform poorly under distribution shift, and a strong constraint similar to KL divergence can limit the upper bound of the model performance. The experimental results in Table | 1,292 | 1,711 | 1,292 |
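Since the log-linear formulation referenced above underlies everything that follows, here is a toy sketch of scoring translation hypotheses with weighted feature functions, one of which is a language model of the form h(e, f) = h(e). The brute-force candidate list stands in for the actual hypothesis search, and both feature functions are artificial placeholders of our own.

```python
import math

def loglinear_score(e, f, feature_fns, weights):
    """Log-linear translation score sum_m lambda_m * h_m(e, f); the decoder
    searches for the hypothesis e maximising this sum."""
    return sum(lam * h(e, f) for h, lam in zip(feature_fns, weights))

def decode(f, candidates, feature_fns, weights):
    """Brute-force stand-in for the search: pick the best-scoring candidate."""
    return max(candidates, key=lambda e: loglinear_score(e, f, feature_fns, weights))

# Two toy features: a 'translation model' proxy and a 'language model' h(e, f) = h(e).
tm = lambda e, f: -abs(len(e.split()) - len(f.split()))   # crude length-match feature
lm = lambda e, f: math.log(1e-3 + e.count("the"))         # crude fluency feature
print(decode("le chat", ["the cat", "cat the cat sat"], [tm, lm], [1.0, 0.5]))
```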
Large Language Models in Machine Translation | This paper reports on the benefits of largescale statistical language modeling in machine translation. A distributed infrastructure is proposed which we use to train on up to 2 trillion tokens, resulting in language models having up to 300 billion n-grams. It is capable of providing smoothed probabilities for fast, single-pass decoding. We introduce a new smoothing method, dubbed Stupid Backoff, that is inexpensive to train on large data sets and approaches the quality of Kneser-Ney Smoothing as the amount of training data increases. | Given a source-language (e.g., French) sentence f , the problem of machine translation is to automatically produce a target-language (e.g., English) translation ê. The mathematics of the problem were formalized by where {h m (e, f )} is a set of M feature functions and {λ m } a set of weights. One or more feature functions may be of the form h(e, f ) = h(e), in which case it is referred to as a language model. We focus on n-gram language models, which are trained on unlabeled monolingual text. As a general rule, more data tends to yield better language models. Questions that arise in this context include: (1) How might one build a language model that allows scaling to very large amounts of training data? (2) How much does translation performance improve as the size of the language model increases? (3) Is there a point of diminishing returns in performance as a function of language model size? This paper proposes one possible answer to the first question, explores the second by providing learning curves in the context of a particular statistical machine translation system, and hints that the third may yet be some time in answering. In particular, it proposes a distributed language model training and deployment infrastructure, which allows direct and efficient integration into the hypothesis-search algorithm rather than a follow-on re-scoring phase. While it is generally recognized that two-pass decoding can be very effective in practice, single-pass decoding remains conceptually attractive because it eliminates a source of potential information loss. | Traditionally, statistical language models have been designed to assign probabilities to strings of words (or tokens, which may include punctuation, etc.). Let w L 1 = (w 1 , . . . , w L ) denote a string of L tokens over a fixed vocabulary. An n-gram language model assigns a probability to w L 1 according to (2) where the approximation reflects a Markov assumption that only the most recent n -1 tokens are relevant when predicting the next word. For any substring w j i of w L 1 , let f (w j i ) denote the frequency of occurrence of that substring in another given, fixed, usually very long target-language string called the training data. The maximum-likelihood (ML) probability estimates for the n-grams are given by their relative frequencies . (3) While intuitively appealing, Eq. ( In principle, the predictive accuracy of the language model can be improved by increasing the order of the n-gram. However, doing so further exacerbates the sparse data problem. The present work addresses the challenges of processing an amount of training data sufficient for higher-order n-gram models and of storing and managing the resulting values for efficient use by the decoder. The topic of large, distributed language models is relatively new. 
Recently a two-pass approach has been proposed More recently, a large-scale distributed language model has been proposed in the contexts of speech recognition and machine translation Both approaches differ from ours in that they store corpora in suffix arrays, one sub-corpus per worker, and serve raw counts. This implies that all workers need to be contacted for each n-gram request. In our approach, smoothed probabilities are stored and served, resulting in exactly one worker being contacted per n-gram for simple smoothing techniques, and in exactly two workers for smoothing techniques that require context-dependent backoff. Furthermore, suffix arrays require on the order of 8 bytes per token. Directly storing 5-grams is more efficient (see Section 7.2) and allows applying count cutoffs, further reducing the size of the model. State-of-the-art smoothing uses variations of context-dependent backoff with the following scheme: where ρ(•) are pre-computed and stored probabilities, and λ(•) are back-off weights. As examples, Kneser-Ney Smoothing In general, the backoff factor α may be made to depend on k. Here, a single value is used and heuristically set to α = 0.4 in all our experiments with N being the size of the training corpus. Stupid Backoff is inexpensive to calculate in a distributed environment while approaching the quality of Kneser-Ney smoothing for large amounts of data. The lack of normalization in Eq. ( We use the MapReduce programming model Our system generates language models in three main steps, as described in the following sections. Vocabulary generation determines a mapping of terms to integer IDs, so n-grams can be stored using IDs. This allows better compression than the original terms. We assign IDs according to term frequency, with frequent terms receiving small IDs for efficient variable-length encoding. All words that occur less often than a pre-determined threshold are mapped to a special id marking the unknown word. The vocabulary generation map function reads training text as input. Keys are irrelevant; values are text. It emits intermediate data where keys are terms and values are their counts in the current section of the text. A sharding function determines which shard (chunk of data in the MapReduce framework) the pair is sent to. This ensures that all pairs with the same key are sent to the same shard. The reduce function receives all pairs that share the same key and sums up the counts. Simplified, the map, sharding and reduce functions do the following: Note that the Reduce function emits only the aggregated value. The output key is the same as the intermediate key and automatically written by MapReduce. The computation of counts in the map function is a minor optimization over the alternative of simply emitting a count of one for each tokenized word in the array. Figure The process of n-gram generation is similar to vocabulary generation. The main differences are that now words are converted to IDs, and we emit ngrams up to some maximum order instead of single Map(string key, string value) { // key=docid, ignored; value=document array ids = ToIds(Tokenize(value)); for i = 1 .. #ids for j = 0 .. maxorder-1 Emit(ids[i-j .. i], "1"); } Again, one may optimize the Map function by first aggregating counts over some section of the data and then emit the aggregated counts instead of emitting "1" each time an n-gram is encountered. The reduce function is the same as for vocabulary generation. 
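Once the counts from the n-gram generation step are available, scoring with Stupid Backoff as defined earlier in this section reduces to a short recursion: use the relative frequency when the full n-gram was seen, otherwise back off with α = 0.4, bottoming out at the unigram frequency over the corpus size N. The in-memory Counter below is a simplification of ours; in the distributed system the counts live on shards rather than on one machine.

```python
from collections import Counter

ALPHA = 0.4   # heuristic backoff factor used throughout the experiments

def stupid_backoff(counts, total_tokens, ngram):
    """Stupid Backoff score S(w_i | w_{i-k+1}^{i-1}).
    Relative frequency if the full n-gram was seen, otherwise ALPHA times the
    score of the shortened context; the base case is f(w_i)/N. These are scores,
    not normalised probabilities."""
    if len(ngram) == 1:
        return counts[ngram] / total_tokens
    if counts[ngram] > 0 and counts[ngram[:-1]] > 0:
        return counts[ngram] / counts[ngram[:-1]]
    return ALPHA * stupid_backoff(counts, total_tokens, ngram[1:])

tokens = "the cat sat on the mat the cat ate".split()
counts = Counter(tuple(tokens[i:i + n]) for n in range(1, 4)
                 for i in range(len(tokens) - n + 1))
N = len(tokens)
print(stupid_backoff(counts, N, ("the", "cat", "sat")))   # seen trigram: 1/2
print(stupid_backoff(counts, N, ("the", "dog", "sat")))   # backs off twice: 0.4*0.4*f(sat)/N
```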
The subsequent step of language model generation will calculate relative frequencies r(w i |w i-1 i-k+1 ) (see Eq. 3). In order to make that step efficient we use a sharding function that places the values needed for the numerator and denominator into the same shard. Computing a hash function on just the first words of n-grams achieves this goal. The required ngrams w i i-n+1 and w i-1 i-n+1 always share the same first word w i-n+1 , except for unigrams. For that we need to communicate the total count N to all shards. Unfortunately, sharding based on the first word only may make the shards very imbalanced. Some terms can be found at the beginning of a huge number of n-grams, e.g. stopwords, some punctuation marks, or the beginning-of-sentence marker. As an example, the shard receiving n-grams starting with the beginning-of-sentence marker tends to be several times the average size. Making the shards evenly sized is desirable because the total runtime of the process is determined by the largest shard. The shards are made more balanced by hashing based on the first two words: int ShardForKey(string key, int nshards) { string prefix = FirstTwoWords(key); return Hash(prefix) % nshards; } This requires redundantly storing unigram counts in all shards in order to be able to calculate relative frequencies within shards. That is a relatively small amount of information (a few million entries, compared to up to hundreds of billions of n-grams). The input to the language model generation step is the output of the n-gram generation step: n-grams and their counts. All information necessary to calculate relative frequencies is available within individual shards because of the sharding function. That is everything we need to generate models with Stupid Backoff. More complex smoothing methods require additional steps (see below). Backoff operations are needed when the full ngram is not found. If r(w i |w i-1 i-n+1 ) is not found, then we will successively look for r(w i |w i-1 i-n+2 ), r(w i |w i-1 i-n+3 ), etc. The language model generation step shards n-grams on their last two words (with unigrams duplicated), so all backoff operations can be done within the same shard (note that the required n-grams all share the same last word w i ). State-of-the-art techniques like Kneser-Ney Smoothing or Katz Backoff require additional, more expensive steps. At runtime, the client needs to additionally request up to 4 backoff factors for each 5-gram requested from the servers, thereby multiplying network traffic. We are not aware of a method that always stores the history backoff factors on the same shard as the longer n-gram without duplicating a large fraction of the entries. This means one needs to contact two shards per n-gram instead of just one for Stupid Backoff. Training requires additional iterations over the data. Step 0 Step 1 Step Table The most commonly used variant of Kneser-Ney smoothing is interpolated Kneser-Ney smoothing, defined recursively as where D is a discount constant and {λ(w i-1 i-n+1 )} are interpolation weights that ensure probabilities sum to one. Two additional major MapReduces are required to compute these values efficiently. Our goal is to use distributed language models integrated into the first pass of a decoder. This may yield better results than n-best list or lattice rescoring We therefore implemented a new decoder architecture. The decoder first queues some number of requests, e.g. 
1,000 or 10,000 n-grams, and then sends them together to the servers, thereby exploiting the fact that network requests with large numbers of n-grams take roughly the same time to complete as requests with single n-grams. The n-best search of our machine translation decoder proceeds as follows. It maintains a graph of the search space up to some point. It then extends each hypothesis by advancing one word position in the source language, resulting in a candidate extension of the hypothesis of zero, one, or more additional target-language words (accounting for the fact that variable-length source-language fragments can correspond to variable-length target-language fragments). In a traditional setting with a local language model, the decoder immediately obtains the necessary probabilities and then (together with scores Figure The process is illustrated in Figure The alternating processes of queuing, waiting and scoring/pruning are done once per word position in a source sentence. The average sentence length in our test data is 22 words (see section 7.1), thus we have 23 rounds We focused on machine translation when describing the queued language model access. However, it is general enough that it may also be applicable to speech decoders and optical character recognition systems. We trained 5-gram language models on amounts of text varying from 13 million to 2 trillion tokens. The data is divided into four sets; language models are trained for each set separately We compiled four language model training data sets, listed in order of increasing size: The English side of Arabic-English parallel data provided by LDC 5 (237 million tokens). ldcnews: This is a concatenation of several English news data sets provided by LDC 6 (5 billion tokens). webnews: Data collected over several years, up to December 2005, from web pages containing predominantly English news articles (31 billion tokens). web: General web data, which was collected in January 2006 (2 trillion tokens). For testing we use the "NIST" part of the 2006 Arabic-English NIST MT evaluation set, which is not included in the training data listed above 7 . It consists of 1797 sentences of newswire, broadcast news and newsgroup texts with 4 reference translations each. The test set is used to calculate translation BLEU scores. The English side of the set is also used to calculate perplexities and n-gram coverage. We measure the size of language models in total number of n-grams, summed over all orders from 1 to 5. There is no frequency cutoff on the n-grams. 5 Figure The web data set has the smallest relative increase. This can be at least partially explained by the higher vocabulary cutoff. The largest language model generated contains approx. 300 billion n-grams. Table A standard measure for language model quality is perplexity. It is measured on test data T = w |T | 1 : This is the inverse of the average conditional probability of a next word; lower perplexities are better. Figure Increase in coverage depends on the training data set. Within each set, we observe an almost constant growth (correlation r 2 ≥ 0.989 for all sets) with each doubling of the training data as indicated by numbers next to the lines. The fastest growth occurs for webnews data (+0.038 for each doubling), the slowest growth for target data (+0.022/x2). 
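As a side note on the perplexity figures just discussed, the measure is simply the inverse geometric mean of the per-token conditional probabilities over the test data; a small sketch follows. The score_fn argument is a stand-in for any conditional model — for instance the Stupid Backoff scorer sketched earlier, bearing in mind that its unnormalized scores make perplexity only loosely meaningful, unlike for Kneser-Ney — and the flooring of zero scores is our own assumption to keep the logarithm finite.

```python
import math

def perplexity(test_tokens, score_fn, order=5, floor=1e-10):
    """PPL(T) = exp(-(1/|T|) * sum_i log p(w_i | w_{i-n+1..i-1})), computed in log space."""
    total_log = 0.0
    for i, word in enumerate(test_tokens):
        context = tuple(test_tokens[max(0, i - order + 1):i])  # up to n-1 preceding tokens
        p = max(score_fn(word, context), floor)                 # floor zero scores to avoid log(0)
        total_log += math.log(p)
    return math.exp(-total_log / len(test_tokens))
```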
We use a state-of-the-art machine translation system for translating from Arabic to English that achieved a competitive BLEU score of 0.4535 on the Arabic-English NIST subset in the 2006 NIST machine translation evaluation 8 . Beam size and re-ordering window were reduced in order to facilitate a large number of experiments. Additionally, our NIST evaluation system used a mixture of 5, 6, and 7-gram models with optimized stupid backoff factors for each order, while the learning curve presented here uses a fixed order of 5 and a single fixed backoff factor. Together, these modifications reduce the BLEU score by 1.49 BLEU points (BP) 9 at the largest training size. We then varied the amount of language model training data from 13 million to 2 trillion tokens. All other parts of the system are kept the same. Results are shown in Figure We then add a second language model using ldcnews data. The first point for ldcnews shows a large improvement of around 1.4 BP over the last point for target for both KN and SB, which is approximately twice the improvement expected from doubling the amount of data. This seems to be caused by adding a new domain and combining two models. After that, we find an improvement of 0.56-0.70 BP for each doubling of the ldcnews data. The gap between Kneser-Ney Smoothing and Stupid Backoff narrows, starting with a difference of 0.85 BP and ending with a not significant difference of 0.24 BP. Adding a third language models based on webnews data does not show a jump at the start of the curve. We see, however, steady increases of 0.39-0.51 BP per doubling. The gap between Kneser-Ney and Stupid Backoff is gone, all results with Stupid Backoff are actually better than Kneser-Ney, but the differences are not significant. We then add a fourth language model based on web data and Stupid Backoff. Generating Kneser-Ney models for these data sizes is extremely expensive and is therefore omitted. The fourth model 9 1 BP = 0.01 BLEU. We show system scores as BLEU, differences as BP. shows a small but steady increase of 0.15 BP per doubling, surpassing the best Kneser-Ney model (trained on less data) by 0.82 BP at the largest size. The amount of benefit from doubling the training size is partly determined by the domains of the data sets A distributed infrastructure has been described to train and apply large-scale language models to machine translation. Experimental results were presented showing the effect of increasing the amount of training data to up to 2 trillion tokens, resulting in a 5-gram language model size of up to 300 billion n-grams. This represents a gain of about two orders of magnitude in the amount of training data that can be handled over that reported previously in the literature (or three-to-four orders of magnitude, if one considers only single-pass decoding). The infrastructure is capable of scaling to larger amounts of training data and higher n-gram orders. The technique is made efficient by judicious batching of score requests by the decoder in a serverclient architecture. A new, simple smoothing technique well-suited to distributed computation was proposed, and shown to perform as well as more sophisticated methods as the size of the language model increases. Significantly, we found that translation quality as indicated by BLEU score continues to improve with increasing language model size, at even the largest sizes considered. 
This finding underscores the value of being able to train and apply very large language models, and suggests that additional performance gains may be had by pursuing this direction further. | 539 | 1,575 | 539
Knowledge-Augmented Language Model Verification | Recent Language Models (LMs) have shown impressive capabilities in generating texts with the knowledge internalized in parameters. Yet, LMs often generate the factually incorrect responses to the given queries, since their knowledge may be inaccurate, incomplete, and outdated. To address this problem, previous works propose to augment LMs with the knowledge retrieved from an external knowledge source. However, such approaches often show suboptimal text generation performance due to two reasons: 1) the model may fail to retrieve the knowledge relevant to the given query, or 2) the model may not faithfully reflect the retrieved knowledge in the generated text. To overcome these, we propose to verify the output and the knowledge of the knowledge-augmented LMs with a separate verifier, which is a small LM that is trained to detect those two types of errors through instruction-finetuning. Then, when the verifier recognizes an error, we can rectify it by either retrieving new knowledge or generating new text. Further, we use an ensemble of the outputs from different instructions with a single verifier to enhance the reliability of the verification processes. We validate the effectiveness of the proposed verification steps on multiple question answering benchmarks, whose results show that the proposed verifier effectively identifies retrieval and generation errors, allowing LMs to provide more factually correct outputs. Our code is available at | Recent Language Models (LMs) To mitigate hallucination of LMs, recent works have proposed to augment LMs with the knowledge retrieved from external knowledge sources (e.g., Wikipedia and Wikidata) In this work, we aim to overcome these suboptimalities of knowledge-augmented LMs. In other words, our goal is to verify whether the retrieved knowledge used for augmenting LMs is related to generating the answers for the given questions and whether the generated answers include the relevant parts of the retrieved knowledge. To this end, we propose to train a small, tailorable LM that is able to verify the aforementioned two failure cases of knowledge-augmented LMs in retrieval and generation steps. More specifically, we first automatically construct the training labels by categorizing the failure of knowledge-augmented LMs into two Figure In addition, we further propose refining the output from knowledge-augmented LMs if our verifier identifies the error in either the knowledge retrieval or the knowledge reflection. Specifically, we repeat the answer generation process until the model retrieves the knowledge relevant to the given question and incorporates the correctly retrieved knowledge into the generated answer, based on the verifier outcome. Also, since detecting errors of knowledge-augmented LMs with a single instruction given to the verifier might be inaccurate, we further construct an ensemble over multiple outputs from different instructions with a sin-gle verifier. Notably, one extra advantage of our verifier is that it is a plug-and-play module that works with any public or proprietary LMs, since we only require input-output pairs of LMs for verification without any architectural changes. We refer to our proposed method as Knowledge-Augmented Language Model Verification (KALMV). We experimentally validate the effectiveness of our KALMV on two different Question Answering (QA) tasks, namely open-domain QA and knowledge graph QA. 
The experimental results show that our KALMV can effectively verify the failure cases of knowledge-augmented LMs in knowledge retrieval and answer generation steps, contributing to significant reduction of the hallucination. Also, further analyses demonstrate the effectiveness of our error-rectifying and ensemble strategies. Our findings and contributions are threefolds: • We point out the underexplored challenges of knowledge-augmented LMs, which are retrieval of irrelevant knowledge and unfaithful knowledge grounding. • We introduce a novel verifier that identifies whether the retrieved knowledge is relevant to the question and reflected in the answer, and further present useful strategies for rectifying incorrect answers as well as improving the effectiveness of the verifier via ensembling. • We validate our KALMV on open-domain and knowledge graph question answering tasks, demonstrating its effectiveness in verifying the errors of knowledge-augmented LMs. | Language Models Pre-trained Language Models (LMs) Knowledge-Augmented LMs Early works aim to incorporate knowledge from external knowledge sources (e.g., Wikipedia) into LMs, in order to enhance their performances on tasks that require factual knowledge, such as question answering. While such previous knowledge-augmented LMs Knowledge-Augmented Fact Checking Similar to the motivation of the aforementioned knowledgeaugmented LMs, recent works In contrast, our proposed verifier can recognize the relevance of the retrieved knowledge before incorporating it into the LMs. Second, previous works suppose that the retrieved knowledge used for factchecking is accurately reflected in the generated answer; however, LMs often ignore the given knowledge and hallucinate the answer, whereas we can detect and rectify such the grounding error. Lastly, unlike most fact-checking methods that always provide the answer with its refinement, our method can further decline to provide answers unless they are validated as correct. These differences highlight the novel contributions of our verification approach, compared against previous fact-checking methods. We now formally describe knowledge-augmented LMs, and present our method, Knowledge Augmented Language Model Verification (KALMV). We begin with the explanation of language models. Language Models In our problem setup, the goal of Language Models (LMs) is to generate a factually correct answer in response to an input query from a user, which is formally defined as follows: ŷ = LM(x), where x and ŷ are the input and output pair, each of which consists of a sequence of tokens, and LM is the language model. We assume that LMs are already trained on massive instructionfinetuning datasets, which are capable of performing diverse tasks (e.g., question answering) In order to tackle the aforementioned challenges of naive LMs, some works However, despite the enormous successes of the aforementioned knowledge-augmented LMs, there exist remaining issues that have largely underexplored. First, the knowledge retrieved to augment LMs might be irrelevant to answer the given question, since the retrieval is not always accurate in real-world scenarios. Second, even if the retrieved knowledge is useful, LMs sometimes reflect the irrelevant part of the retrieved knowledge, or might completely ignore the knowledge and generate the answer based on their incorrect knowledge. 
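For reference, the knowledge-augmented generation step under discussion can be sketched as below; retriever.search and lm.generate are hypothetical interfaces standing in for any off-the-shelf retriever and instruction-finetuned LM, and the prompt wording is ours rather than the paper's.

```python
def knowledge_augmented_answer(question, retriever, lm, top_k=1):
    """Return (k, y_hat): the top retrieved knowledge and the answer conditioned on it."""
    knowledge = retriever.search(question, top_k=top_k)[0]  # k: highest-scoring passage/fact
    prompt = (
        "Answer the question using the provided knowledge.\n"
        f"Knowledge: {knowledge}\n"
        f"Question: {question}\n"
        "Answer:"
    )
    return knowledge, lm.generate(prompt)                   # y_hat = LM(x, k)
```

Both failure modes above occur inside this single call: the retrieved k may be irrelevant, or the generated answer may ignore it.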
In par-ticular, as shown in Figure To overcome the challenges of existing knowledgeaugmented LMs, we propose a novel verification method that identifies not only the relevance of the retrieved knowledge to the input question but also the reflection of the knowledge in the generated answer, which we refer to as Knowledge-Augmented Language Model Verification (KALMV). Verification of Retrieved Knowledge Given the triplet of the input query, the retrieved knowledge, and the generated answer (x, k, ŷ), we aim to verify whether the retrieved knowledge k is relevant to the input query x. Since recent LMs To be specific, we prompt the verifier LM to determine the relevance based on the verification instruction i as well as the input, knowledge, and generated answer triplet (x, k, ŷ), formalized as follows: , where Verifier k denotes the LM for retrieved knowledge verification, and o k denotes its output. Note that we formulate the verification task as a multiple-choice questionanswering task, i.e., the verifier should produce either "A" for incorrect retrieval or "B" for correct. Verification of Generated Answer Our next objective is to identify whether the generated answer from LM is grounded in the retrieved knowledge. To achieve this, similar to the retrieved knowledge verification process explained in the above paragraph, we use the separate, small-size, instructionfinetuned LM for answer verification. Formally, given the input query, retrieved knowledge, and generated answer triplet (x, k, ŷ), as well as the instruction i describing the task of generated answer verification, the verifier LM produces the output token, namely "A" or "B" where "A" represents that the retrieved knowledge is not reflected in the generated answer and "B" represents the vice versa, formalized as follows: o y = Verifier y (i, x, k, ŷ). Thus far, we propose to detect the errors of knowledge-augmented LMs in knowledge retrieval and answer generation by using distinct LM-based verifiers. However, it is inefficient to perform two individual verification processes, since both verification formulations are identical. Also, the knowledge retrieval and answer generation processes are sequential, which means that verifying the generated answer is unnecessary if the retrieved knowledge is irrelevant. Therefore, we further combine two verification procedures into one by changing the task instruction accordingly with the single verification LM (Verifier). Specifically, Verifier produces one among the following three options: A. the retrieved knowledge is not helpful to answer the question; B. the generated answer is not grounded in the retrieved knowledge; C. all the other cases. Instruction-Finetuning for Verifier While recent instruction-finetuned LMs might be capable of performing the proposed verification task, it may be more beneficial to tailor the LM to the verification task through additional instruction-finetuning. To perform this, we require the following inputoutput pairs: {(x, k, y), o}, where the input consists of the given question, retrieved knowledge, and true answer, and the output is the verification label which we automatically generate. In particular, we first examine whether the retrieved knowledge includes the correct answer, y ⊆ k, as annotated in the training data, and then label it as a retrieval error when the knowledge does not include the correct answer. 
Similarly, if the retrieval is correct yet the generated answer ŷ from LM(x, k) does not have overlapping tokens with the retrieved knowledge k, we label it as the generation error. Finally, for all cases where the generated answer is correct, we label it as correct Ensemble Verification To identify retrieval and generation errors in knowledge-augmented LMs, we forward the instruction along with the query, knowledge, and generated answer to the verifier. However, it might be inaccurate to determine the errors only with a single instruction, since recent LMs are sensitive even to minor changes in the input prompt Knowledge-Augmented Language Models Our verification method provides a distinct advantage in contrast to existing knowledge-augmented LMs and knowledge-augmented fact-checking approaches. That is, existing approaches always provide the answers to users even if they are not reliable; however, our method can withhold the answers if errors are detected by the proposed verifier, which can enhance the reliability and trustworthiness of LM-based systems. However, instead of simply refraining from responding to user queries, it is more worthwhile to rectify errors in the knowledge retrieval and answer generation stages. Thus, we further propose simple yet effective strategies, iteratively correcting errors detected by our verifier. The retrieved knowledge from the external knowledge base might be irrelevant to answer the question due to the retrieval error, which may mislead LMs to generate an incorrect answer. To overcome this issue, we retrieve the new knowledge iteratively until our verifier confirms that the retrieved knowledge is related to answering the question, for a certain number of times (e.g., ten times). Specifically, the knowledge with the highest relevance score to the question is retrieved, while excluding any knowledge that has been used in the previous iterations. Rectifying Errors in Answer Generation Even though the retrieved knowledge is pertinent to the given question, LMs sometimes ignore the knowledge augmented to them and then generate the answer based on their inaccurate knowledge. To tackle this issue, similar to what we previously did on knowledge retrieval, we iteratively generate the answer until the answer is confirmed by the verifier, for the specific number of times. Note that, in order to generate the answer differently across different trials, we leverage the top-k sampling In this section, we describe the datasets, models, evaluation metrics, and implementation details. We provide the additional details in Appendix A. We evaluate our Knowledge-Augmented Language Model Verification (KALMV) on factual Open-Domain Question Answering (ODQA) and Knowledge Graph Question Answering (KGQA) tasks. The goal of open-domain question answering (ODQA) task is to generate answers in response to factual questions usually with the relevant knowledge retrieved from the external knowledge source. As the knowledge source, we use Wikipedia which is an open encyclopedia consisting of millions of documents. For datasets, we use Natural Questions Knowledge Graph Question Answering In addition to ODQA, we evaluate our KALMV method on knowledge graph question answering (KGQA), whose goal is to answer the questions that are answerable by the facts over knowledge graphs. For datasets, we use WebQSP We compare our KALMV against relevant baselines that augment LMs with external knowledge and have strategies to reduce hallucinations. 
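Before the baseline comparison, the verification-and-rectification procedure described above can be summarized in a rough sketch. The three options follow the wording given earlier, but the exact prompt format, the retrieve/generate/verifier interfaces, and the bounded retry counts (the paper mentions, e.g., ten retrieval attempts) are illustrative assumptions rather than the authors' implementation.

```python
VERIFY_OPTIONS = (
    "A. The retrieved knowledge is not helpful to answer the question.\n"
    "B. The generated answer is not grounded in the retrieved knowledge.\n"
    "C. All the other cases.\n"
)

def verify(question, knowledge, answer, verifier):
    """Ask the small verifier LM for one of 'A', 'B', 'C'; fall back to 'A' if unparsable."""
    prompt = (f"{VERIFY_OPTIONS}"
              f"Question: {question}\nKnowledge: {knowledge}\nAnswer: {answer}\nChoice:")
    out = verifier.generate(prompt).strip().upper()
    return out[0] if out and out[0] in "ABC" else "A"

def rectified_answer(question, retrieve, generate, verifier,
                     max_retrieval=10, max_generation=10):
    """Re-retrieve on retrieval errors, re-sample on grounding errors, else withhold the answer."""
    used = []
    for _ in range(max_retrieval):
        knowledge = retrieve(question, exclude=used)             # next-best unused candidate
        used.append(knowledge)
        answer = generate(question, knowledge)
        verdict = verify(question, knowledge, answer, verifier)
        for _ in range(max_generation):
            if verdict != "B":
                break
            answer = generate(question, knowledge, sample=True)  # top-k sampling for diversity
            verdict = verify(question, knowledge, answer, verifier)
        if verdict == "C":
            return answer                                        # verified: safe to return
        # verdict is 'A' (or still 'B'): try different knowledge on the next iteration
    return None                                                  # decline to answer
```

Returning None here corresponds to the option, noted next, of refraining from answering when verification never succeeds.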
Note that models including verification can refrain from providing answers if the verifier identifies errors. Naive Language Models This baseline uses only the LMs without incorporating external knowledge. Knowledge-Augmented LMs This baseline augments LMs with the knowledge retrieved from the external knowledge base (Wikipedia or Wikidata). Adaptive Retrieval This baseline LLM-Augmenter This baseline KALMV This is our Knowledge-Augmented Language Model Verification (KALMV) method, which not only verifies both the retrieval and generation errors with the instruction-finetuned tailored verifier, but also iteratively rectifies errors. Following the standard evaluation protocol of generative QA We use the same retriever across different models for fair comparisons. In particular, for ODQA, we use BM25 We also report the performance of our verifier with regards to F1, recall, and precision scores in Figure Please note that we also provide the case study on the three verification categories in Table Ablation & Sensitive Analyses To see how much our ensemble strategy contributes to the performance gain, and also how sensitive the components in KALMV are across different models, we perform ablation and sensitive analyses on ensemble, retrieval, verification, and generation parts. First, as shown in the first row of Table For sensitive analyses, we first change the knowledge retriever for open-domain QA from the sparse (BM25) to the dense (DPR) retriever Analyses on Generalization to Unseen Data It is worthwhile noting that our KALMV can be directly applicable to other datasets without any further training on them. To show this, we first train the verifier of KALMV on the source data (e.g., Natural Questions) and then evaluate KALMV on the target data (e.g., HotpotQA), with FLAN Base used as the LM for generation and verification. As shown in Table In this work, we proposed Knowledge-Augmented Language Model Verification (KALMV), which identifies not only the relevance of the retrieved knowledge to the input query but also the faithfulness of the reflection of knowledge in the generated answers, in order to prevent incorrect answer generations with knowledge-augmented LMs. To this end, we developed a verifier that can detect errors in both the knowledge retrieval and answer generation stages by instruction-finetuning LMs. Further, during inference, we proposed to rectify errors by re-retrieving knowledge and re-generating answers if our KALMV detects errors, and also perform an ensemble over multiple verification outputs from different instructions, to improve the efficacy of the verifier. We validated KALMV on two question answering tasks and showed its effectiveness in significantly reducing hallucinations. We believe that KALMV will bring substantial practical impact in improving the reliability of LM-based systems, especially since it is a plug-and-play module. In this section, we faithfully discuss the current limitations and potential avenues for future research. First, we propose to instruction-finetune the verifier LM to customize it to the proposed verification task that aims to detect errors in knowledge retrieval and answer generation steps. Then, through our experimental results and analyses, we show that our proposed verifier trained by the automatically generated input-output pairs (See Section 3.2) is effective in identifying errors. 
However, the automatic label-generation processes that we suggest are indeed simple and they may introduce the potential to incorrectly generate the verification label in some particular scenarios (e.g., multi-step reasoning with multiple sources of knowledge). Therefore, someone may improve the labels required for instruction-finetuning verifiers by annotating them manually with humans or designing more sophisticated strategies, which we leave as future work. Second, our work initiates a new problem setup of detecting errors of knowledge-augmented LMs in two different perspectives: knowledge retrieval and answer generation. However, each component and strategy of the proposed KALMV method is a bit separated. Specifically, the retriever and verifier are not jointly trained, while the signal from training the verifier may help improve the retriever's performance. Also, regarding the error rectifying steps, while we can iteratively correct failures on knowledge-augmented LMs, the previous and current rectifying steps are handled separately. However, the current step may get benefits from the results of the previous steps. We leave developing and building more ideas on improving components of our proposed KALMV method as future work. Hallucination, which is a phenomenon where the language models generate responses that are plausible and sound yet factually incorrect, is a critical problem especially when deploying LMs in production since it can induce the spreading of misinformation. In this work, the proposed knowledgeaugmented language model verification (KALMV) method contributes to significantly reducing hallucinations of LMs, by verifying their retrieved knowledge and generated answers, and further rectifying them if errors are detected. However, there may be some cases where our verifier misclassifies the failure cases of knowledge-augmented LMs as correct, potentially leading to severe negative consequences, especially in mission-critical domains and systems. Therefore, it is important for us to put more effort into making LMs more reliable and trustworthy with advanced verification methods. Here we provide additional experimental setups, including the instruction that we use for verification. Instruction Prompt In Table In our experiments in Section 5, we include this LLM-Augmenter model as our major baseline As it is worthwhile to investigate the increment of computational costs incurred by answer verification of our KALMV compared to the one without verification, we measure the relative increment in costs that our verifier additionally brings compared to the whole costs of running base knowledge-augmented LMs, and report it in Table In Table Table The following is a multiple choice question about a question answering task. In this task, you should generate an output given a question with a passage. The passage is retrieved from Wikipedia, which may or may not be helpful to answer the question. | 1,461 | 2,939 | 1,461 |
Evaluating and Improving Factuality in Multimodal Abstractive Summarization | Current metrics for evaluating factuality for abstractive document summarization have achieved high correlations with human judgment, but they do not account for the vision modality and thus are not adequate for visionand-language summarization. We propose CLIPBERTSCORE, a simple weighted combination of CLIPScore (Hessel et al., 2021) and BERTScore (Zhang* et al., 2020) to leverage the robustness and strong factuality detection performance between image-summary and document-summary, respectively. Next, due to the lack of meta-evaluation benchmarks to evaluate the quality of multimodal factuality metrics, we collect human judgments of factuality with respect to documents and images. We show that this simple combination of two metrics in the zero-shot setting achieves higher correlations than existing factuality metrics for document summarization, outperforms an existing multimodal summarization metric, and performs competitively with strong multimodal factuality metrics specifically fine-tuned for the task. Our thorough analysis demonstrates the robustness and high correlation of CLIP-BERTSCORE and its components on four factuality metric-evaluation benchmarks. Finally, we demonstrate two practical downstream applications of our CLIPBERTSCORE metric: for selecting important images to focus on during training, and as a reward for reinforcement learning to improve factuality of multimodal summary generation w.r.t automatic and human evaluation. 1 | Multimodal abstractive summarization is the task of generating an abridged text that contains the most important information of the source inputs from various modalities. This challenging task builds upon the success of document summarization, where the input is only text documents. For document summarization, there has been tremendous progress in improving the quality of the summaries with the help of large pre-trained models While there have been significant advancements in developing metrics that correlate highly with the human judgment of factuality In this work, we introduce a metric that judges factuality of the summary with respect to each input modality. Focusing on the vision-and-language summarization, we propose CLIPBERTSCORE, a simple and robust automatic factuality evaluation metric for multimodal summaries that combines two successful metrics: CLIPScore Next, due to the lack of corpora containing ground-truth human factuality judgments to eval-uate multimodal factuality metrics via correlation with human evaluation, we propose a Multimodal Factuality Meta-Evaluation (MUFAME) benchmark by collecting human annotation for four summarization systems and the reference summary on WikiHow, Next, we perform a detailed analysis of CLIP-BERTSCORE by evaluating the correlation of the metric and each of its modules on four additional factuality metric-evaluation benchmarks. We first propose the WikiHow Factuality (WikiHowFact) task, derived from the Visual Goal-Step Inference task Lastly, we present two practical applications for improving the factuality of downstream multimodal summarization models using CLIP-BERTSCORE: (1) Selecting the most important images as visual guidance To summarize, our contributions are: 1. We propose a simple and robust factuality metric for multimodal summarization based on a combination of CLIPScore and BERTScore. | Cut just the tip of the nails. 
Be sure you know where the quick is before you attempt to cut the nail … You should first cut just the tip of the nails ... 2. We create MUFAME, a meta-evaluation for factuality of multimodal summarization, and the WikiHowFact task to evaluate the quality of multimodal factuality metrics. 3. We present a detailed study of our metric and its components on various factuality metricevaluation benchmarks and present strong empirical evidence of its robustness. 4. We demonstrate two useful downstream applications of our metric to improve the factuality of multimodal abstractive summarization models. CLIPBERTSCORE consists of two parts that tackle the image-summary and document-summary factuality judgments, respectively. We show an illustration of the computation in Figure Image-Summary. We use a variant of CLIP-Score Thus, it serves as a fitting candidate for factuality evaluation between the image and the summary. We use CLIP-S, which calculates the cosine similarity between the image embedding v and the text embedding of the summary sentence t. To adapt to multimodal summarization, where we have multiple images and multi-sentence summaries, Document-Summary. To better detect hallucinations present in the summary with respect to the document, we use the precision variant of BERTScore Full Metric. The final score is a combination of the factuality score for image-summary with CLIP-S and that for document-summary with BERT-S: CLIPBERTSCORE = αCLIP-S+(1-α)BERT-S, where α is a tunable parameter. Please see Section 3.4 for other ways to learn this combination. Next, after defining the multimodal factuality metric CLIPBERTSCORE, we want to evaluate the quality of this new metric by checking whether it correlates with human judgments, similar to what has been done for textual factuality metrics Dataset. We construct an English multimodal WikiHow summarization dataset Annotations. We conduct the annotations on Amazon Mechanical Turk The workers then need to choose whether each summary is faithful to the document and the image separately. An example of the annotation page can be seen in Appendix A.3. For high-quality annotations, we first conduct a qualification test, where we compare the annotations from the workers against annotations by the authors. Only the workers who have the same annotations on the selected example can perform the actual annotation task. We further select workers from the United States, who have more than 10,000 HITs approved and an approval rate greater than 98%. We pay 0.18 USD per task to ensure a > $12 hourly rate. Each task consists of three unique workers, and we take the majority class for the document and image factuality judgments, similar to We consider the summary to be faithful only if it is considered faithful to both document and image. We also experiment beyond binary judgment by taking the average over the two factuality judgment to indicate a summary may be partially faithful to one of the source, which is shown in Appendix B. Inter-Annotator Agreement. We report Fleiss Kappa κ 3.2 Experimental Setup CLIPBERTSCORE. For CLIP-S, we use the RN50x64 visual backbone instead of the ViT-B/32 version used in the original metric, as the larger backbone shows a higher correlation on factuality benchmarks. For BERT-S, we choose RoBERTa-large-mnli to compute the contextualized embeddings instead of RoBERTa-large for the same reason. We refer readers to Section 4 for more details. 
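As a compact reference, the combination defined above can be sketched as follows. clip_similarity and bertscore_precision are stand-ins for the off-the-shelf CLIP (RN50x64) and BERTScore (RoBERTa-large-mnli) scorers; averaging over image-sentence pairs is our assumption, since the excerpt does not spell out the aggregation, and the default α matches the tuned value reported next.

```python
import numpy as np

def clip_s(summary_sentences, images, clip_similarity):
    """Image-summary score: mean CLIP cosine similarity over image-sentence pairs (aggregation assumed)."""
    sims = [clip_similarity(img, sent) for img in images for sent in summary_sentences]
    return float(np.mean(sims))

def clip_bert_score(summary_sentences, images, document,
                    clip_similarity, bertscore_precision, alpha=0.25):
    """CLIPBERTSCORE = alpha * CLIP-S + (1 - alpha) * BERT-S (precision variant)."""
    image_score = clip_s(summary_sentences, images, clip_similarity)
    text_score = bertscore_precision(" ".join(summary_sentences), document)
    return alpha * image_score + (1.0 - alpha) * text_score
```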
We use the validation set of MUFAME to tune α, where we find that α = 0.25 achieves the best correlations on the combined judgment. We use this parameter for all experiments (See Section 3.4 for other ways to learn this combination). For image-summary evaluation, we compare our CLIP-S against Triplet Network, as described in For multimodal factuality metrics, we experiment with several weighted combinations of documentsummary and image-summary metrics by tuning the weights on the validation set, including combinations of DAE with CLIP-S, Triplet Network with BERT-S, and RefCLIP-S. We also compare to MMAE Table Next, for the document-summary factuality judgments, BERT-S achieves the highest correlation, outperforming DAE by 8 points and the original BERTScore by 4 points. Compared to MMAE, which is developed for evaluating the quality of multimodal summarization, CLIPBERTSCORE significantly outperforms on all three categories, showing the importance of targeting the factuality aspect. While Triplet-Net achieves better correlations on image, CLIPBERTSCORE actually outperforms the fine-tuned variants for the document case and provides the same correlations on the combined case. We thus stress the simplicity of CLIPBERTSCORE of only requiring the use of two off-the-shelf metrics in the zero-shot setting without the need for extra training to compare competitively with fine-tuned method. While CLIPBERTSCORE uses α to decide the weights for CLIP-S and BERT-S, we also explore using logistic regression (logis) and multi-layer perceptron (MLP) to output a final score given the two modules, following Get your hamster plenty of toys. Isolate any sick quail. Isolate birds that already have the disease. Remove toys, blankets, beds, and other objects from the crate. would achieve the highest Pearson correlation on the development set of MUFAME meta-evaluation dataset. We evaluate CLIPBERTSCORE and its components on additional factuality metric-evaluation benchmarks, focusing on how robust the metrics performs across a variety of tasks and domains. We propose the WikiHow Factuality (WikiHow-Fact) task that evaluates how well the metric can choose the correct summaries over incorrect ones. We derive this task from WikiHow VGSI Results. We present the WikiHowFact result in Table The FOIL BISON We compare how BERT-S and CLIPText-S correlate on FRANK, a factuality benchmark evaluation for document abstractive summarization containing 2,250 annotations for generated summaries on XSum Finally, we present two useful downstream applications for improving factuality of multimodal summarization models: first by using the metric as a reference image selection to guide the model in attending important images, and second by using it as a reward for self-critical sequence training. For both applications, we train strong baseline models by adapting CLIP-BART 10 One of the well-known tasks is multimodal summarization with multimodal output We compare against the model using ROUGE as the visual guidance. Following A more generalized application to improve factuality is to use CLIPBERTSCORE as a reward for the self-critical sequence training We use CLIPBERTSCORE and ROUGE-2 as the rewards, so as to improve factually while maintaining informativeness. Following The result is shown in Table To evaluate the factuality of the summaries generated by models trained with SCST against that by the base model, we conduct a human evaluation on a randomly sampled 100 articles from the MMSS test set. 
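To make the reinforcement-learning application concrete, a rough sketch of one self-critical update with the combined reward is given below: the sampled summary's reward minus the greedy summary's reward scales the negative log-likelihood of the sample. The equal mixing weight and the rouge2 / clipbertscore / sample_log_prob callables are illustrative assumptions, not the authors' exact training code.

```python
def scst_loss(sampled_summary, greedy_summary, reference, document, images,
              sample_log_prob, rouge2, clipbertscore, mix=0.5):
    """Self-critical loss: -(R(sampled) - R(greedy)) * log p(sampled)."""
    def reward(summary):
        # combine factuality (CLIPBERTSCORE) and informativeness (ROUGE-2); weighting assumed
        return (mix * clipbertscore(summary, document, images)
                + (1.0 - mix) * rouge2(summary, reference))
    advantage = reward(sampled_summary) - reward(greedy_summary)  # greedy output as the baseline
    return -advantage * sample_log_prob(sampled_summary)          # minimizing this raises the reward
```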
We perform the same AMT experiment as described in Section 3.1. We ensure the same > $12 hourly rate and pay 0.1 USD per HIT. For each summary, we aggregate the 3 annotator scores for the document, image, and combined judgments. The final factuality score is the average across the 100 examples. The result is shown in Table Multimodal Summarization. The task of multimodal summarization takes additional inputs from multiple modalities apart from the input text document, including images Faithfulness and Factuality Metrics. Many metrics have been proposed to evaluate the factuality of generated summaries. The metrics can be roughly categorized into entailment-based and question generation and question answering (QGQA) metrics. Entailment-based metrics In this work, we present CLIPBERTSCORE, an automatic metric for evaluating factuality for multimodal abstractive summarization. Through metaevaluation with MUFAME and additional factuality benchmarks, we show CLIPBERTSCORE and its modules correlate well with the human judgment of factuality with respect to the document, image and combined. CLIPBERTSCORE is robust across the different image and text domains and achieves competitive correlation in the zero-shot setting with more complex metrics. We hope this work provides a meta-evaluation for evaluating future multimodal factuality metrics with MUFAME, a strong baseline metric CLIPBERTSCORE to compare against, and two methods to improve the factuality of multimodal abstractive summarization models. We limit our work to the task that only contains the vision modality through images and the text modality. However, we note that multimodal summarization also contains video and audio, which we leave for future works. Furthermore, similar to all pretraining models, CLIPScore and BERTScore are also known for reflecting biases of the pre-training data PEGASUS. PEGASUS CLIP-BART. The architecture of CLIP-BART is described in Section 5. The total number of parameters is around 140 million. We fine-tune the model starting from the BART-base checkpoint, and use the CLIP RN50x64 visual encoder to extract image features. We use mixed precision, and the training was performed on a single NVIDIA RTX A6000 GPU for approximately 6 hours. Figure We also experiment with combining the two judgments in a continuous way, by taking the average of the two judgments so that a score of 0.5 indicates that the summary is faithful to only one modality. The combined judgment is shown in Table C.1 WikiHowFact Details The three negative images are selected with three different sampling strategies, following | 1,470 | 1,878 | 1,470 |
xPQA: Cross-Lingual Product Question Answering across 12 Languages | Product Question Answering (PQA) systems are key in e-commerce applications to provide responses to customers' questions as they shop for products. While existing work on PQA focuses mainly on English, in practice there is need to support multiple customer languages while leveraging product information available in English. To study this practical industrial task, we present xPQA, a large-scale annotated cross-lingual PQA dataset in 12 languages across 9 branches, and report results in (1) candidate ranking, to select the best English candidate containing the information to answer a non-English question; and | Product question answering (PQA) is a key technology in e-commerce applications. Given a question about a product, a PQA system searches the product webpage and provides an instant answer, so that customers do not need to traverse the page by themselves or seek help from humans | Figure 2017; To address this, we present xPQA, the first largescale dataset for cross-lingual PQA enabling non-English questions to be answered from English content. Most comprehensive product information is usually available in a majority language such as English. Therefore, searching for relevant information in English often has a better chance of finding an answer. Most existing multilingual QA datasets are created by translating English questions, introduc-ing translation artifacts and discrepencies from native speakers' real information-seeking behaviors Based on the collected dataset, we report baseline results on two subtasks: (a) candidate ranking, which selects the best English candidate that contains the information to answer the non-English question; (b) answer generation, which generates a natural-sounding non-English answer to present to the user based on the selected English candidate. We find that applying a cross-lingual ranker trained on a Wikipedia-based QA dataset generalizes poorly to the product domain. The performance is even worse than training a multilingual ranker on the English in-domain data, suggesting that domain transferability is even more crucial than language transferability. The translation-based approach is the most effective for candidate ranking while the multilingual-finetuning works the best for answer generation. Nonetheless, on both tasks, there is a substantial gap between the English-based and cross-lingual performances. In the following, we first elaborate on the problem formulation for the cross-lingual PQA task ( §2), then explain the xPQA data collection process ( §3), and present experiment results ( §5.2) and conclusions ( §6). Task There are two important tasks for a crosslingual PQA system: candidate ranking and answer generation. In candidate ranking, given a question in a target language and a list of candidates in English, the ranker predicts a relevance score for every candidate and selects the top one. Candidate ranking is necessary because a given product webpage may contain hundreds of information pieces about the product, so as a practical matter we select the top candidate to use in generation. After getting the top candidate, an answer generator takes it as input together with the question and produces an answer in the question language. This step is crucial in order to deploy a user-friendly PQA system since the candidate is neither in the user language nor written specifically to answer the question. Scenario We consider two scenarios for both tasks: zero-shot and fine-tuned. 
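A minimal sketch of this two-stage pipeline is given below; the zero-shot and fine-tuned scenarios described next differ only in how the ranker and generator are trained, not in this inference flow. ranker.score and generator.generate are hypothetical interfaces.

```python
def answer_product_question(question, english_candidates, ranker, generator):
    """Rank English candidates against the (non-English) question, then answer in the question's language."""
    best_candidate = max(english_candidates,
                         key=lambda c: ranker.score(question, c))  # task (a): candidate ranking
    answer = generator.generate(question=question,
                                candidate=best_candidate)          # task (b): answer generation
    return best_candidate, answer
```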
Zero-shot assumes that we do not have any labeled data and must rely on transfer learning from the English-based PQA dataset In our experiments, we use ePQA as the Englishbased PQA dataset, which is an extension of the dataset in To train and evaluate our two tasks, the xPQA dataset contains annotations for (1) questioncandidate relevance to label whether every candidate is relevant to the question or not, and (2) answers where a natural-sounding answer is manually written if the candidate contains enough information to address the question. The collection process follows the steps below: 1. Question Collection For our question set, we crawl publicly-available community questions from Amazon.com product pages in 11 markets, obtaining questions in 12 different languages. For each language, we choose the corresponding market, then sample 2,500 unique questions. From these sampled questions, we select 1500 questions for each language that are manually verified by our annotators as being in the target language, information seeking, and containing no offensive content. For every valid question, we link its corresponding product page in the US market (except for Hindi and Tamil which directly use the India market) and extract all English candidates from product information sources (details in Appendix B.2). Then, we translate every question into English with AWS translate, The top-5 English candidates and the non-English original questions are passed to annotators to judge their relevance. Each candidate is marked with one of three labels: "fully answering" (contains enough information to address the question), "partially answering" (contains useful information to partially address the question), and "irrelevant" (does not provide any helpful information). Guidelines are available in Appendix B.3. To increase the answer coverage, questions for which none of the top-5 candidates are marked as "fully answering" are given to annotators who are asked to actively search for the answer on the Amazon product page. If they find candidates fully answering the question, these are included with the label "fully answering". For candidates marked as "fully answering", annotators are then asked to write natural, direct answers based on them. All annotators are bilingual, hired through the centific platform For each task, we experiment with three types of baseline approaches: translate-test, translatetrain, and multilingual The essential idea here is to rely exclusively on English-centric models and datasets. In the zero-shot scenario, models are trained on the ePQA dataset. In the fine-tuned scenario, we must translate questions and answers in the xPQA dataset into English as this is an English-centric model. This translated dataset, termed xPQA_MT is used to further fine-tune the zero-shot models. At runtime, we use an external machine translation model to translate the question into English and apply the ranker to select the best candidate. Afterwards, an English-based generator produces an answer in English, which is then post-translated to the target language. Translate-test is a common approach in industry as it uses well-trained English-based models and off-the-shelf translation tools without further modifications. However, such a pipelined process introduces runtime latency and can lead to error propagation if translation quality is not perfect. Translate-train In contrast to the above, here we apply all translation processes in training, or offline, so that no additional latency is added at runtime. 
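To make that latency contrast concrete, a sketch of the translate-test inference path described above is shown here; the two runtime translation calls are exactly what the translate-train approach moves offline. translate, ranker, and generator are hypothetical stand-ins for an external MT service and English-only models.

```python
def translate_test_answer(question, question_language, english_candidates,
                          translate, ranker, generator):
    """Translate-test inference: pre-translate the question, post-translate the answer."""
    question_en = translate(question, source=question_language, target="en")  # runtime MT call 1
    best = max(english_candidates, key=lambda c: ranker.score(question_en, c))
    answer_en = generator.generate(question=question_en, candidate=best)
    return translate(answer_en, source="en", target=question_language)        # runtime MT call 2
```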
In the zero-shot scenario, we machine-translate all questions and answers in the ePQA dataset into each of the 12 languages we consider. The resulting dataset, termed ePQA_MT, is used to train a multilingual model. In the fine-tuned scenario, we further finetune the model on the xPQA dataset. As the model is defined to be multilingual, it can directly take input questions in their original languages and output answers in the target languages without any translation process. Multilingual Finally, this approach is similar to the translate-train one in that both use multilingual models rather than an English-only model, but the difference is that the multilingual approach requires no translations at training time. In the zeroshot scenario, it trains a multilingual pretrained model directly on the English-only ePQA dataset and relies only on its own pretrained multilingual knowledge to adapt to other languages. In the finetuned scenario, we further fine-tune the model on the xPQA dataset. Note that this approach still requires runtime post-translation of the generated English answer into the target language. This is because we find that multilingual models can only generate English answers when trained only on English datasets. Although special decoding constraints could be use to restrict output vocabulary to that of the target language, zero-shot multilingual adaptation in generation tasks is still an open challenge It is worth mentioning that the three types of approaches can be combined. For example, we could follow the translate-train approach to train the candidate ranker and follow the multilingual approach to train the answer generator. Details of the model implementation are in Appendix C. Although many QA works report end-to-end performances, we chose not to report them because (1) Most product questions, as well as the information sources such as reviews and customer answers, are subjective. The correctness of answers depends on the specific candidates for which there is no universal ground truth (McAuley and Yang, 2016); (2) Only providing answers grounded on references is a critical requirement for an online PQA deployment. When candidate ranking fails to provide suitable candidates in the first stage, even if the answer generator manages to make a good guess, Task 1: Candidate Ranking Table Task 2: Answer Generation Table There are cross-lingual QA datasets in other domains. When building a system for xPQA, is it better to use an English-only in-domain QA dataset or a crosslingual out-of-domain QA dataset? To answer this question, we train a new multilingual ranker on the XOR-TyDi dataset Comparisons to machine-translated questions are shown in Table Runtime Latency Table This paper presents xPQA, a dataset for crosslingual PQA supporting non-English questions to be answered from English product content. We report baseline results and findings for three approaches: translate-test, multilingual, and translatetrain. Experiments show that the translate-test approach performs the best for the candidate ranking task while the translate-train approach performs the best for the answer generation task. However, there remains significant room for improvement relative to an English-based monolingual PQA system. We hope that future research can benefit from our work to improve cross-lingual PQA systems. While the xPQA dataset is created to be as close to the real-world scenario as possible, it has two major drawbacks. 
Firstly, the candidate set in the dataset does not include the full candidates for a given product because annotating all candidates is prohibitively expensive. The subjectivity of product questions and candidates also makes it hard to get ground-truth short-span answers, which prevents a straightforward end-to-end evaluation over the full candidate set. A potential fix is to run human evaluations on the top-1 candidate over the full candidate set from each model, but it'd be costly to do so. A more realistic solution is to have an online evaluation for the best model only, which we leave for future work. Secondly, the answer annotation is based only on a single candidate because handling information from multiple candidates requires careful instructions on conflicting information and summarization skills. This might limit the model in answering complex questions that require inference over multiple candidates. However, we find this case to be very rare in real customer questions. Furthermore, as we do not summarize multiple candidates, the returned answer can be biased toward the opinion of a single customer. Our evaluation also has potential limitations in that (1) We did not extensively evaluate the quality of generated answers with manual annotation. It is known that BLEU scores might not correlate well with human evaluations on generation tasks, and they can be misleading in certain cases; (2) We only compared major types of baseline algorithms and did not explore the effects of leveraging existing larger, more powerful pre-trained language models such as mT0 E-commerce has been increasingly popular these years. Nonetheless, a big amount of people cannot benefit much from it because most E-commerce websites only support a few major languages. Deploying an xPQA system can have a broad impact across a wide range of non-English speakers to assist them in their shopping experience. With a well-developed xPQA system, we only need to maintain comprehensive product information in one majority language, but allow non-English speakers easily get access to the product information. This can significantly reduce the maintenance cost and benefit the democratization of AI. Nevertheless, there are two major caveats before deploying a safe, reliable xPQA system: (1) The answer generator needs to be fairly evaluated by humans in terms of faithfulness. While answer generation can greatly improve user-friendliness, it also brings potential risks of providing false information; (2) The users should be well noticed that the provided answer is drawn from the opinion of a single customer or other sources. It cannot reflect the opinion of the vendor, or seller nor imply any trend from the public. Product Question Answering Product question answering (PQA) differs from general-knowledge QAs in that questions often seek subjective opinions on specific products, so earlier research usually treated it as an opinion mining problem (2) It does not restrict the product categories, while the original dataset focuses only on the toys and games products; (3) It defines finer-grained 3-class labels for each candidate, while the original dataset contains only binary labels; (4) Every candidate is checked with its context (surrounding sentences) to make sure the label is correct. To the best of our knowledge, all existing PQA datasets are monolingual and questions are usually in high-resource languages such as English or Chinese, which leads to our motivation of building a cross-lingual PQA dataset. 
Cross-Lingual Question Answering Recently, many non-English question answering (QA) datasets in the general Wikipedia domain have been proposed Notably, ePQA contains 131,52/1,000/2,000 questions in the train/dev/test sets respectively, which is significantly larger than xPQA (as in realistic scenarios). It can be used to analyze the performance gap between mono-lingual PQA and cross-lingual PQA. In the question collection phase, questions are kept if they fulfill the following criteria: (1) It is identified as the target language through Amazon Comprehend Our candidates come from 6 information sources: (1) product title, (2) semi-structured attributes, (3) product bullet points, (4) product description, (5) community answers (excluding the answer that directly replies to the question); (6) user reviews. Every product title and attribute is treated as a single candidate. For the other product information, we split them into sentences and treat each sentence as the candidate. For candidates from community answers, We further concatenate them with the corresponding community questions to provide more context. All candidates are lower cases and emojis are removed. Numbers from the semi-structured attributes are further normalized to keep at most 2 decimals. Each candidate is marked with one of three labels: "fully answering" (it contains enough information to address the question), "partially answering" (it contains useful information to partially address the question), and "irrelevant" (it's not useful in answering the question at all). To make sure the candidate is properly understood, we also provide its context (surrounding sentences) to the annotators. The exact definitions for the three labels and guidelines used are: • Fully answering. Meaning that the response contains clear information to tell about the answer. It can take some inference step to get the answer, but it must contain enough information to help come to the answer. • Partially answering (relevant but not fully answering). Meaning that the response contains useful information that help one understand more, and narrow down the range of the answer, yet not enough to get the exact answer from it. • Irrelevant. Meaning that the response does not provide useful relevant information at all, and a customer will not get anything new about their question after reading it. Note that in this step, annotators do NOT need to consider factual correctness. For the question "what color is it?", it does not matter if the response is saying it is blue or red. Annotators should focus on the content only but not the factual correctness. Besides, even if it contains other extra information or the response is not natural, as long as the proper information is included, then it is considered as fully answering. Specifically, Fully answering means the response contains enough information to let one draw the answer. The criteria of fully answering should NOT be overly strict. Annotators can be lenient with the word choice, as long as the response conveys the proper meaning. For example: Question: is it an awesome gift for my girl friend? Response: it is a nice valentine gift for your partner. In this case, the difference between "awesome" and "nice" is not relevant, as the response is either way saying that it is a good gift for your girl friend or partner, and thereby should be judged as "fully answering". Another example: Question: is it comfortable to sleep on for a 6" tall man? Response: It is comfortable to lie down for tall people. 
Annotators should not be overly strict about whether 6" can be considered as "tall" and whether "lie down" is equivalent to "sleep on", etc. Based on common sense, if the immediate impression after reading the response provides the needed information, one should NOT overthink other ways of interpreting this response. Helpful but not fully answering means the response contains helpful information, but is not enough to answer the question, or it can fully answers the question but the information is uncertain. "Helpful" means it provides useful information to help you know more about the question or narrow down the scope of the answer. For example: -question: Is it good for my 3year-old kid? -response: my 5-year-old son likes it. It cannot fully tell whether a 3-year-old will like it, but knowing that a 5-year-old likes it is helpful information. It helps you narrow down the range of the answer -You know it is for kids but not adults, just not sure if it works exactly for 3-year-old. "irrelevant" means the response provides zero useful information about the question, and is totally useless. Imagine you are a customer that raises this question, you should only select this option when you cannot find any useful information from the response. During the answer annotation, annotators are instructed to provide a natural, informative, and complete sentence to directly answer the user questions given the provided information in the response. The provided answer is required to be: • natural. It should be a fluent, natural-sounding sentence. • informative. It should provide key information or explanations for users to better under-stand the question. It cannot be a single word like "Yes" or "No" without further content. • complete. It should be a complete sentence that provides more context instead of a short span. There is also a caveat to avoid copying the candidate exactly. Annotators should always extract useful information from it and show the reasoning step in the answer to make it a natural reply. If the candidate is from a customer-provided content, they are further instructed to write from a thirdparty viewpoint. For user-provided contents, the answer will be in the form of "A customer says he feels ..." instead of "I feel ...". Annotations are done through the centific platform | 615 | 278 | 615 |
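To make the candidate preparation described above for the ranking task concrete, here is a minimal sketch of the preprocessing the paper reports (lowercasing, emoji removal, keeping at most two decimals for attribute numbers, and concatenating community answers with their questions). The function names, the emoji regex, and the product record layout are illustrative assumptions, not the authors' code.

```python
import re

# Rough emoji range; an illustrative sketch, not an exhaustive Unicode emoji specification.
_EMOJI = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF]")

def normalize_candidate(text: str) -> str:
    """Lowercase a candidate, strip emojis, and keep at most 2 decimals for numbers."""
    text = _EMOJI.sub("", text.lower())
    text = re.sub(r"\d+\.\d{3,}", lambda m: f"{float(m.group()):.2f}", text)
    return text.strip()

def build_candidates(product: dict) -> list:
    """Assemble ranking candidates from the six information sources described above."""
    candidates = [normalize_candidate(product["title"])]
    candidates += [normalize_candidate(a) for a in product.get("attributes", [])]
    for field in ("bullet_points", "description", "reviews"):
        for sentence in product.get(field, []):   # assumed to be pre-split into sentences
            candidates.append(normalize_candidate(sentence))
    for question, answer in product.get("community_qa", []):
        # Community answers are concatenated with their question for extra context.
        candidates.append(normalize_candidate(f"{question} {answer}"))
    return candidates
```

Each resulting candidate would then be paired with the (translated or original-language) question and labeled with one of the three classes above for training the ranker.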
A Case Study of Analysis of Construals in Language on Social Media Surrounding a Crisis Event | The events that took place at the Unite the Right rally held in Charlottesville, Virginia on August 11-12, 2017 caused intense reaction on social media from users across the political spectrum. We present a novel application of psycholinguistics -specifically, construal level theory -to analyze the language on social media around this event of social import through topic models. We find that including psycholinguistic measures of concreteness as covariates in topic models can lead to informed analysis of the language surrounding an event of political import. | Construal Level theory (CLT) To illustrate, consider the example of climate change. Research has shown that when people are primed to think about the topic of climate change using more concrete terms such as beetle and forest vs. more abstract terms (sea levels), they are more likely to engage with the topic of climate change Construals can differ based on geographical, social and temporal distance. An event which is distant in the future would be described in language that has higher levels of abstractness (and therefore low concreteness) than an event which is more proximal. Given that (a) language use reflects differing levels of construals and (b) construals can differ for events that are temporally distant vs. temporally proximal, we seek to investigate whether individuals on social media would discuss an event using different levels of construals and whether we can determine the effects of these construals from their language use. We thus use Construal Level Theory as a theoretical foundation to understand the reaction of individuals on Twitter related to the Unite the Right rally that took place in Charlottesville, Virginia on August 11-12, 2017. We apply topic models to analyze language use and study how users view the events that took place during the protests. To demonstrate, consider the tweets shown in Table Our work, situated at the intersection of psycholinguistics and computational social science, makes the following salient contributions: • We extend the application of Construal Level Theory beyond laboratory settings to make it more ecologically valid; • To analyze language produced spontaneously on social media, we use topic modeling and include concreteness values as covariates in the topic models. | Construal Level Theory to Study Human Behavior: Construal level theory, first introduced by (2014) conducted a study regarding how psychological distance of thought would impact the positivity of reactions. They showed how distance from a scenario (having it happen to oneself versus to someone else) impacts one's reaction to it. Topic Models to Study Language Data: Topic modeling techniques, based on probabilistic latent semantic analysis A major challenge while studying social media data is representativeness and sample selection bias We use R and the STM We then used an existing concreteness lexi-con After constructing a topic model, the patterns noticed among the topics and among the words that were most common in each topic can be used to explain the construal levels of the users.It is important to note that some of the topics produced, specifically Topic 2, 7, and 10 contained foul language, reflecting the harsh and opinionated nature of the tweets made regarding this event. 
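As a concrete illustration of how concreteness enters the topic models as a covariate, the hedged sketch below computes a per-tweet mean concreteness score from a word-rating lexicon; the CSV layout, column names, and tokenization are assumptions, and the actual covariate fitting in the paper is done with the stm package in R.

```python
import csv

def load_concreteness(path: str) -> dict:
    """Load word -> concreteness rating. Assumes a CSV with 'word' and 'concreteness'
    columns; the real lexicon's file layout may differ."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row["word"].lower(): float(row["concreteness"]) for row in csv.DictReader(f)}

def tweet_concreteness(tweet: str, lexicon: dict) -> float:
    """Mean concreteness over the tweet's in-lexicon tokens; 0.0 when nothing is covered."""
    tokens = [t.strip(".,!?:;'\"#@").lower() for t in tweet.split()]
    scores = [lexicon[t] for t in tokens if t in lexicon]
    return sum(scores) / len(scores) if scores else 0.0

# These per-document scores (together with a before/during/after indicator of the event)
# are the kind of values that would be passed as prevalence covariates when fitting
# the structural topic model in R.
```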
We summarize our two main findings in this paper, while more indepth analysis and contextualization within a larger research project is the main focus of an upcoming, larger publication. Figure As discussed above, concrete terms refer to specific tangible objects, while abstract terms can be general ideas or emotions. Topics 3 and 9 stand out as the least and most concrete, resp. Other topics with high concreteness terms in the tweets are Topic 1, 6 and 10. Most topics are characterized by low concreteness values This suggests that the abstract construals are likely to appear both before and after the event but not during. This finding is consistent with prior research applying Construal Level Theory in lab settings. The protests that took place in Charlottesville in August of 2017 caused an outsize reaction on social media. We investigate how individuals perceive an event during its occurrence and after it ends, through the lens of Construal Level Theory. Our main finding is that adding concreteness values as covariates during topic modeling can help distinguish which topics were prevalent before, during and after the event. We find that during the ongoing discussion surrounding the protests (time period of Feb through Oct 2017 in our corpus), it was more likely that abstract terms that refer to ideas and emotions were used. Notably, we found that language using more concrete terms was used to describe the events after they occurred. This finding is not surprising -it is easier to discuss an event in concrete terms after it occurs, because individuals will have specific objects (like car and torch) to refer to, in addition to proper nouns like specific names or places. However, a significant dip in the expected topic proportion after the event (c.f. Figure Limitations: We acknowledge several limitations of our work: • Single Event: Our analysis is focused on a single event: the Charlottesville protest rally. As such, we cannot yet claim generalizability of our findings. We offer our research as a first foray into a series of analyses focusing on construals across varying events and contexts. For example, one direction for future work is suggested in analysis of construals about the COVID-19 pandemic at different stages of an ongoing, global event. • Deeper Analysis of Concrete Terms: In this work, we do not present an in-depth study for the concrete vs. abstract words associated with each topic. Certainly, interesting questions to ask would be whether the frex terms (highest ranking frequent and exclusive words) or the highest probability words in each topic are correlated in any way with the concreteness values. We address this limitation as part of our future work. • Language Limitations: Our study is focused on an event that occurred in the United States. As such, all of our data are in English. As part of addressing the question of generalizability of findings, we further aim to replicate our findings in multiple languages given appropriate data. Concreteness lexicons now exist in multiple languages, including Dutch | 564 | 1,746 | 564 |
Which Side are You on? Identifying Perspectives at the Document and Sentence Levels | In this paper we investigate a new problem of identifying the perspective from which a document is written. By perspective we mean a point of view, for example, from the perspective of Democrats or Republicans. Can computers learn to identify the perspective of a document? Not every sentence is written strongly from a perspective. Can computers learn to identify which sentences strongly convey a particular perspective? We develop statistical models to capture how perspectives are expressed at the document and sentence levels, and evaluate the proposed models on articles about the Israeli-Palestinian conflict. The results show that the proposed models successfully learn how perspectives are reflected in word usage and can identify the perspective of a document with high accuracy. | In this paper we investigate a new problem of automatically identifying the perspective from which a document is written. By perspective we mean a "subjective evaluation of relative significance, a point-of-view." (1) The inadvertent killing by Israeli forces of Palestinian civilians -usually in the course of shooting at Palestinian terrorists -is considered no different at the moral and ethical level than the deliberate targeting of Israeli civilians by Palestinian suicide bombers. (2) In the first weeks of the Intifada, for example, Palestinian public protests and civilian demonstrations were answered brutally by Israel, which killed tens of unarmed protesters. Example 1 is written from an Israeli perspective; Example 2 is written from a Palestinian perspective. Anyone knowledgeable about the issues of the Israeli-Palestinian conflict can easily identify the perspectives from which the above examples were written. However, can computers learn to identify the perspective of a document given a training corpus? When an issue is discussed from different perspectives, not every sentence strongly reflects the perspective of the author. For example, the following sentences were written by a Palestinian and an Israeli. (3) The Rhodes agreements of 1949 set them as the ceasefire lines between Israel and the Arab states. (4) The green line was drawn up at the Rhodes Armistice talks in 1948-49. Examples 3 and 4 both factually introduce the background of the issue of the "green line" without expressing explicit perspectives. Can we develop a system to automatically discriminate between sentences that strongly indicate a perspective and sentences that only reflect shared background information? A system that can automatically identify the perspective from which a document is written will be a valuable tool for people analyzing huge collections of documents from different perspectives. Political analysts regularly monitor the positions that countries take on international and domestic issues. Media analysts frequently survey broadcast news, newspapers, and weblogs for differing viewpoints. Without the assistance of computers, analysts have no choice but to read each document in order to identify those from a perspective of interest, which is extremely time-consuming. What these analysts need is to find strong statements from different perspectives and to ignore statements that reflect little or no perspective. In this paper we approach the problem of learning individual perspectives in a statistical framework. 
We develop statistical models to learn how perspectives are reflected in word usage, and we treat the problem of identifying perspectives as a classification task. Although our corpus contains document-level perspective annotations, it lacks sentence-level annotations, creating a challenge for learning the perspective of sentences. We propose a novel statistical model to overcome this problem. The experimental results show that the proposed statistical models can successfully identify the perspective from which a document is written with high accuracy.
A document perspective D n is then sampled from a binomial distribution with the parameter π. The value of D n is either d 0 (Israeli) or d 1 (Palestinian). Words in the document are then sampled from a multinomial distribution, where L n is the length of the document. A graphical representation of the model is shown in Figure The model described above is commonly known as a naïve Bayes (NB) model. NB models have been widely used for various classification tasks, including text categorization To predict the perspective of an unseen document using naïve Bayes , we calculate the posterior distribution of D in (5) by integrating out the parameters, However, the above integral is difficult to compute. As an alternative, we use Markov Chain Monte Carlo (MCMC) methods to obtain samples from the posterior distribution. Details about MCMC methods can be found in Appendix A. We introduce a new binary random variable, S, to model how strongly a perspective is reflected at the sentence level. The value of S is either s 1 or s 0 , where s 1 indicates a sentence is written strongly from a perspective while s 0 indicates it is not. The whole generative process is modeled as follows: The parameters π and θ have the same semantics as in the naïve Bayes model. S is naturally modeled as a binomial variable, where τ is the parameter of S. S represents how likely it is that a sentence strongly conveys a perspective. We call this model the Latent Sentence Perspective Model (LSPM) because S is not directly observed. The graphical model representation of LSPM is shown in Figure As before, we resort to MCMC methods to sample from the posterior distributions, given in Equations ( As is often encountered in mixture models, there is an identifiability issue in LSPM. Because the values of S can be permuted without changing the likelihood function, the meanings of s 0 and s 1 are ambiguous. In Figure We solve the identifiability problem by forcing θ d 1 ,s 0 and θ d 0 ,s 0 to be identical and reducing the number of θ parameters to three. As shown in Figure We evaluate three different models for the task of identifying perspective at the document level: two naïve Bayes models (NB) with different inference methods and Support Vector Machines (SVM) To evaluate the statistical models, we train them on the documents in the bitterlemons corpus and calculate how accurately each model predicts document perspective in ten-fold cross-validation experiments. Table Training The results on the mismatched training and testing experiments are shown in Table We list the most frequent words (excluding stopwords) learned by the the NB-M model in Table 4. The frequent words overlap greatly between the Palestinian and Israeli perspectives, in-cluding "state," "peace," "process," "secure" ("security"), and "govern" ("government"). This is in contrast to what we expect from topical text classification (e.g., "Sports" vs. "Politics"), in which frequent words seldom overlap. Authors from different perspectives often choose words from a similar vocabulary but emphasize them differently. For example, in documents that are written from the Palestinian perspective, the word "palestinian" is mentioned more frequently than the word "israel." It is, however, the reverse for documents that are written from the Israeli perspective. Perspectives are also expressed in how frequently certain people ("sharon" v.s. "arafat"), countries ("international" v.s. "america"), and actions ("occupation" v.s. "settle") are mentioned. 
While one might solicit these contrasting word pairs from domain experts, our results show that statistical models such as SVM and naïve Bayes can automatically acquire them. In addition to identifying the perspective of a document, we are interested in knowing which sentences of the document strongly conveys perspective information. Sentence-level perspective annotations do not exist in the bitterlemons corpus, which makes estimating parameters for the proposed Latent Sentence Perspective Model (LSPM) difficult. The posterior probability that a sentence strongly covey a perspective (Example ( The experimental results are shown in Table In this paper we study a new problem of learning to identify the perspective from which a text is written at the document and sentence levels. We show that much of a document's perspective is expressed in word usage, and statistical learning algorithms such as SVM and naïve Bayes models can successfully uncover the word patterns that reflect author perspective with high accuracy. In addition, we develop a novel statistical model to estimate how strongly a sentence conveys perspective, in the absence of sentence-level annotations. By introducing latent variables and sharing parameters, the Latent Sentence Perspective Model is shown to capture well how perspectives are reflected at the document and sentence levels. The small but positive improvement due to sentence-level modeling in LSPM is encouraging. In the future, we plan to investigate how consistently LSPM sentence-level predictions are with human annotations. Based the model specification described in Section 4.2 we derive the Gibbs samplers | 789 | 3,101 | 789 |
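To ground the naïve Bayes formulation above in code, here is a minimal sketch of a bag-of-words perspective classifier with add-one smoothing, which plays roughly the role of the symmetric Dirichlet prior on θ; it uses point estimates rather than the MCMC posterior sampling the paper relies on, and class and variable names are our own.

```python
import math
from collections import Counter

class NaiveBayesPerspective:
    """Binary perspective classifier (e.g., d0 = Israeli vs. d1 = Palestinian) over bags of words."""

    def fit(self, docs, labels):
        self.labels = sorted(set(labels))
        self.prior = {d: labels.count(d) / len(labels) for d in self.labels}
        self.counts = {d: Counter() for d in self.labels}
        for words, d in zip(docs, labels):
            self.counts[d].update(words)
        self.vocab = {w for c in self.counts.values() for w in c}

    def log_posterior(self, words, d):
        total = sum(self.counts[d].values())
        V = len(self.vocab)
        score = math.log(self.prior[d])
        for w in words:
            # Add-one smoothing stands in for a symmetric Dirichlet prior on theta.
            score += math.log((self.counts[d][w] + 1) / (total + V))
        return score

    def predict(self, words):
        return max(self.labels, key=lambda d: self.log_posterior(words, d))
```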
Learning New Skills after Deployment: Improving open-domain internet-driven dialogue with human feedback | Frozen models trained to mimic static datasets can never improve their performance. Models that can employ internet-retrieval for up-to-date information and obtain feedback from humans during deployment provide the promise of both adapting to new information, and improving their performance. In this work we study how to improve internet-driven conversational skills in such a learning framework. We collect deployment data, which we make publicly available, of human interactions, and collect various types of human feedback -including binary quality measurements, free-form text feedback, and fine-grained reasons for failure. We then study various algorithms for improving from such feedback, including standard supervised learning, rejection sampling, modelguiding and reward-based learning, in order to make recommendations on which type of feedback and algorithms work best. We find the recently introduced DIRECTOR model (Arora et al., 2022) shows significant improvements over other existing approaches. | Large language models employed as dialogue agents are primarily trained on human-written documents and human-human conversations collected from the web for pre-training | Input: What's happening in F1 these days? Response: F1 is a metric used for classification. In this work, we study learning from the feedback collected during deployment of models in human-model conversations. Such a setting has the opportunity to learn from within-distribution data, both in terms of the input contexts, but also the responses required (targets). Not only can this mean improvement in skills that are similar to the pre-train and fine-tune data, but potentially the learning of completely new skills -that are desired by users of the system. We thus take existing state of the art internet-augmented models such as BlenderBot 2 We then explore a variety of methods for learning from feedback, and compare them in detailed experiments. In particular, we compare supervised learning methods, rejection sampling, model guiding and reward-based learning. Our findings are: • Taking advantage of modular feedback (feedback about particular errors from modules of the model, such as the search engine component) outperforms feedback about just the final response. • Textual and binary feedback are also very useful signals, but not as much as modular feedback. • The recently introduced DIRECTOR method • Combining multiple types of feedback, such as modular and binary feedback with DIREC-TOR provides the best results we obtained. • Continual learning, whereby we retrain models on the feedback from previous rounds of deployment, improves results even further. • Despite collecting feedback from smaller (3B parameter) models, the data collection is useful for improving much larger (175B parameter) models. We make the collected data and feedback, the models, and the code publicly available for this work There are a number of existing methods for collecting human feedback from human-model conversations. 
Deployed models can be improved in symmetric conversations conducted between models and humans by learning to mimic human conversationalists, as shown in the LIGHT dialogue game Outside of the dialogue domain, there are numerous studies attempting to improve language skills from deployment, including never-endinglearning from language data 3 Deploying and Collecting Feedback To select an input distribution closely aligned with human preferences, we first collected a set of skills humans would like an AI powered text-messaging chatbot to possess. We instruct that the hypothetical chatbot can talk about any topic, and has the ability to surf the internet for information. We then asked each human annotator to provide: (i) a topic (1-10 words), (ii) three tasks related to the topic; and (iii) descriptions of how they would assess if the chatbot has completed those tasks. See Appendix subsection A.1 for a screenshot of the task definition collection instructions, and further details. Overall, we collected 1108 task types via 152 annotators, which cover diverse topics -from making healthy food to loom weaving to Caribbean holidays. Grouping them into types, they include question answering followed by discussion, providing ranked lists, providing reviews, summary generation, personal recommendations, reasoning/deductions (e.g., how to perform calculations), creativity (e.g., tell a joke), tutorials, instructions, and more. Many of these tasks require, or else are made simpler, by use of the internet, e.g., searching for particular entities or topics, and responding conditioned on pertinent results. Some examples are given in Table After collecting topic and task definitions, the next step is to deploy conversational models (bots) that are asked to exhibit these skills. Human conversationalists select a task (out of two randomly chosen tasks) from the set collected in subsection 3.1 and then ask the model to help them complete it over a series of conversational turns. The instructions emphasize that this should be a dialogue ("a back and forth conversation"), and hence the speakers should break up requests or information across messages so that it remains conversational. The human conversationalist is instructed that the bot might not be perfect, in which case feedback can be given in order to improve the bot in the future. We collect various kinds of feedback, from lightweight feedback (binary label or free-form response) to detailed (multiple choice and fine-grained responses) such that in our experiments we can compare and contrast them in order to make recommendations on which kinds of feedback work best. Hence after each dialogue turn we collect the following set of feedback types: • Binary feedback on whether the response was considered satisfactory or not. • Free-form textual feedback on what was wrong in the case of an unsatisfactory response. • Multi-choice input on how the bot could improve this turn: (a) using a better search query; or (b) paying more attention to relevant search results; (c) some other issue; or (d) no issue (a good response). • In the case of selecting (a), the human is then asked what would be a more appropriate search query. • In case (b), the human is shown the search results and asked to select a relevant portion. • In case (c), the human is asked what would be an improved overall response. Continuing the conversation After feedback has been given, the conversation is continued. 
If multiple-choice option (a) was selected previously, the bot on this next turn is forced to use the "gold" search query given by the user. Similarly, for (b), the provided gold knowledge context is added to the input of the model. In the case of (c), the bot is simply bypassed, and it is assumed to have provided the given gold response. In this way, even for a poorly performing bot, headway can be made in the conversation towards completing the task, and collecting feedback on its subsequent stages. (Without such a procedure, the bot may just get stuck in a poor quality loop, and then there would be no choice but to abandon the conversation.) The conversation is continued until the human marks the task as complete or a minimum of 4 turns has been completed. When the task is complete we also collect a final rating (out of 5) for the bot's performance. We consider the following set of state of the art publicly available conversational models: • BlenderBot (BB1) parameter Transformer model pre-trained and fine-tuned on dialogue data to exhibit conversational skills; however these models have no ability to use the internet, but simply generate responses given the dialogue context. • BlenderBot 2.0 (BB2) • SeeKeR • OPT-175B We can evaluate model performance during conversations between humans and the deployed models, as humans are providing direct feedback on the conversational responses from the model. In particular we can measure the number of good responses (with no issue), the average final rating, and compute a breakdown of error types (better search query, results or other issue). Overall, we collect over 210k human-bot utterances in over 14k dialogues (episodes), with feedback for each of the bot utterances. The data is split into three major portions: v1, v2, and test unseen splits, see Table The v2 split consists of dialogues and feedback with the new models that were trained using the v1 data. This data is again split into train, valid and test dialogues. We can then repeat this process and train models on the v2 data as well. Finally, the unseen test split consists of completely new skills (topics and tasks) unseen in the v1 and v2 splits, and is used to test transfer of v1 or v2 based models to these new skills. Data Quality and Verification We also verified the quality of our data. For each conversation, we ask 3 human crowdworkers to rate the bot and human's performance and also assess if the bot was able to complete the given task. We consider the task as complete if 2 out of the 3 annotators labeled the task as complete. We see that in 90.4% of the cases the task is completed. Note that with the feedback from the human (see section 3.2) the human-model conversation should always progress even if the model has errors so ideally if the human is doing a perfect job this would be 100%. We also assess the quality of the human conversationalist directly and ask annotators to "rate the human's messages in defining, clarifying, and helping the bot complete the task on a scale from 1-5 (1 = was not helpful at all in helping the bot complete the task, 5 = guided the bot to complete the task)." For conversations where the task was completed, the human conversation partner's messages were rated at an average of 3.8. For conversations where the task was incomplete, the human conversation partner's messages were rated at an average of 3.5. In the following, we will describe the methods we will experiment with for learning from the collected human feedback. 
The easiest to use type of feedback, with perhaps the strongest learning signal, is a provided gold response by the user for a given dialogue context. One can simply continue to fine-tune the model on the set of collected gold responses (from case (c) in section 3.2). One can optionally also add all the bot responses that were marked as good to the fine-tune set as well (case (d) in section 3.2). We use the validation set to choose the weighting between these two types of supervised data. Using the multiple-choice feedback on the types of improvement, the model can learn to improve those individual components of the model. For BB2 and SeeKeR one can use provided gold search queries (case (a) in section 3.2) directly to fine-tune the search query generation. Provided gold knowledge responses (relevant search results, case (b) in section 3.2)) are similarly easy to use for fine-tuning in the SeeKeR model because the model is already trained to generate such responses directly. For BB2, there are no direct knowledge responses as this is implicit in FiD, so in that case we use a similar method to Hancock et al. ( For free-form textual feedback, we can also use a similar approach and simply fine-tune with the feedback as targets, with special tokens appended to both the input context and the feedback target, again following Using the binary satisfaction feedback signal one can train a reward model. We employ a 311M parameter transformer pre-trained on pushshift.io Reddit Rejection sampling/reranking relies on the set of generated candidates containing at least one good candidate, and has no effect on the initial quality of the candidate generations themselves -it only scores the final generated sequences. We next consider using a reward model trained via subsection 4.4 to train the generation model itself. Given training set contexts, we generate candidates, rerank the candidates, and select the highest ranking. We then train the generation model to use those highest ranking candidates as targets, i.e. by fine-tuning with those targets. This is similar to the approach used in Thoppilan et al. ( The recently introduced DIRECTOR model We provide automatic evaluation results in Table Internet-augmentation helps First, this is an expected result, due to the nature of our tasks, but we find that using internet-augmentation helps in line with other internet-based dialogue tasks Human feedback helps Across the board we find different kinds of feedback can improve our base models BB2 3B and SeeKeR 3B; we will analyse specific methods further in the subsequent discussion. These overall improvements can be seen in terms of all the human evaluation metrics measured (Good response%, Rating, and all three Error Breakdown types), as well as the automatic evaluation metrics we measured (F1 and PPL). We also generally (although not in every single case) see correlation between automatic and human evaluation metrics, e.g. the best methods are best in both types of metric. Modular superior to non-modular feedback In the modular feedback setting humans give feedback about what has gone wrong in the pipeline of the model: whether the internet search query was poor, or the document/knowledge chosen after searching was poorly chosen. Taking into account modular feedback outperforms using only supervised feedback of final responses in both automatic metric and human evaluations for both BB2 and SeeKeR models. 
For BB2 we see close to 2% improvement in Good responses for modular feedback compared to supervised feedback (40.3% → 42.0%), with both far superior to BB2 without feedback (33.2%). However, SeeKeR which has a modular design, and hence is much easier to supply modular feedback to (as the supervision can directly train each module) sees a larger improvement of 4.5% (52.2% → 56.7%). Free-form feedback is useful (but not as much as gold labels) Free-form feedback also gives clear gains over the baseline model for both BB2 and SeeKeR, but falls short of supervised feedback by 3% and 1% respectively for the two model variants. This does not seem surprising as supervised feedback directly gives a clear loss to optimize (simply try to generate the suggestion) whereas feedback is less clear a signal, depending on how it is phrased. However, we do not rule out other free-form feedback algorithms giving better results in the future, see e.g. Iterative deployment and feedback collection improves results further During the process of evaluating all the models that were trained with v1 data described above, more data was collected from those models, which we refer to as the v2 split (see subsection 3.5). We can thus then train models on the v2 split, yielding potentially improved models. In the ideal case one could conduct an iterative continual learning setup, each time retraining on the data collected from previous rounds, improving further each time. We test this setup by training DIRECTOR (module+binary feedback), our best system from v1, with the v2 data split. The result shown in Table Very large models benefit from feedback from smaller models OPT-175B, either in zero-shot or few-shot variants is only pre-trained on dialogue data, and not fine-tuned on our task, and performs reasonably -but not better than smaller models that are fine-tuned. BlenderBot 3 In conclusion, we have studied whether a conversational model can learn new skills after the standard pre-training / fine-tuning setup by interacting with humans during its deployment. We study the use of different kinds of user feedback data and different learning algorithms for leveraging them, in order to compare their performance. We find that granular (modular) feedback about types of errors can yield strong performance, which can also work very well in conjunction with binary feedback using the recently introduced DIRECTOR model, yielding our best results. Evidence also suggests that iterative retraining and redeployment also brings further gains, and that the feedback collected is useful for models differing from the ones originally conversed with, e.g., if much larger models are used in the future. All of our experiments have taken place by deploying conversational agents on Amazon Mechanical Turk with crowdworkers In public deployments with organic users, safety issues also become a much more important factor -in particular dealing with noisy or adversarial inputs and feedback. In the worst case this could mean human conversationalists could teach the model erroneous reasoning, misinformation, toxic or other undesirable behavior. We note that steps to address this issue are studied elsewhere, for example Failures Despite showing continual improvement by re-training on collected human feedback, our models, like other state-of-the-art dialogue models, can still make common mistakes during deployment. 
Failure cases are shown in Figure We use the openly available ParlAI framework for all 3B model training runs, as well as for evaluations, where metrics are measured using default settings. All the 3B fine-tuned models are trained with a maximum of eight 32GB GPUs (NVIDIA V100), optimized with Adam using β 1 = 0.9, β 2 = 0.999, ϵ = 1e -08. Models are trained up to 8000 updates with batch size up to 128. The typical fine-tuning time for the 3B retrieval-based BB2 and SeeKeR models is around 24 hrs before it early stops. | 1,012 | 168 | 1,012 |
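To illustrate the rejection sampling/reranking recipe described in the methods above (generate several candidate responses, score them with the reward model trained on binary satisfaction feedback, keep the best), here is a minimal, framework-agnostic sketch. generate_candidates and reward_score are hypothetical stand-ins for the dialogue model's decoder and the trained reward classifier, not functions from ParlAI.

```python
def rerank_response(context, generate_candidates, reward_score, n_candidates=10):
    """Return the candidate the reward model scores highest for this dialogue context."""
    candidates = generate_candidates(context, n=n_candidates)   # e.g., sampled decodings
    return max(candidates, key=lambda c: reward_score(context, c))

def build_reward_training_set(deployment_logs):
    """Turn binary satisfaction feedback into (context, response, label) training triples."""
    return [(turn["context"], turn["bot_response"], int(turn["satisfied"]))
            for turn in deployment_logs]
```

The same reranked outputs can also be fed back as fine-tuning targets (the reward-based learning variant) rather than only being used at inference time.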
Learn and Consolidate: Continual Adaptation for Zero-Shot and Multilingual Neural Machine Translation | Although existing multilingual neural machine translation (MNMT) models have demonstrated remarkable performance to handle multiple translation directions in a single model and achieved zero-shot translation between language pairs unseen in training, they still suffer from relatively poor translation qualities for some language pairs. A practical scenario is that how to continually update MNMT models for both supervised and zero-shot translations when limited new data arrives. To this end, we propose a two-stage approach that encourages original models to acquire language-agnostic multilingual representations from new data, and preserves the model architecture without introducing parameters. Experimental results and further analysis demonstrate that our method can efficiently improve performance of existing MNMT models in translation directions where they are initially weak, and mitigates the degeneration in the original well-performing translation directions, offering flexibility in the real-world scenario. 1 | Existing multilingual neural machine translation (MNMT) models, such as mBART Fortunately, new parallel sentence pairs will continually emerge between high-resource lan- guages, e.g., German↔Chinese, which can be used to facilitate translation directions with poor performance through supervised learning In this scenario, we aim to continually improve performance for both new supervised and zero-shot translations while retaining previously acquired knowledge in the other translation directions using only new data. In accordance with this scenario, an intuitive solution is to introduce additional parameters for continual adaptation, regarded as parameter-isolation based methods In this work, we propose a two-stage method consisting of a learning stage and a consolidation stage for continual adaptation (LCCA), which efficiently adapts MNMT models to diverse translation directions. We first introduce a flexible pluggable module in the penultimate encoder and decoder layer, respectively, as the multilingual adaptation space. And the introduced multilingual space is optimized with contrastive learning to make representation language-agnostic for zero-shot translations, realizing the learning stage. Then we attempt to compress the additional parameters to the same size of base MNMT models, regarded as a consolidation stage. The second stage adopts the information matrix and collaborative distillation to facilitates original components to learn introduced modules in a specific range. Furthermore, aside from the related components, all the parameters of the original model remain fixed, allowing for a parameter-efficient manner to large-scale MNMT models. The two stages are also an isolated process designed to adapt to diverse application requirements in real-world scenarios. To sum up, our contributions are as follows: • We propose LCCA, a two-stage approach that encourages to learn new knowledge for continual adaptation, while mitigating performance degeneration without introducing additional parameters. • The ability of the original MNMT model to translate into a target language is enhanced via acquiring language-agnostic representation, which improves performance for zeroshot translations in continual learning. 
• Experimental results demonstrate the efficiency and flexibility of our approach in adapting various powerful and open-source MNMT models of different sizes to new parallel data. | Zero-Shot Translation with MNMT Models MNMT models have demonstrated their ability to facilitate knowledge transfer across languages and enable zero-shot translations between language pairs that are not covered in training data For instance, Continual Learning for MNMT Some previous methods of continual learning attempt to address the issue of catastrophic forgetting when only the new data is accessible On the other hand, PET methods introduce additional task-specific parameters and freeze all original parameters to completely retain performance on previous tasks Figure 2: Illustration of our approach LCCA. [Source A, Target P] denotes a positive sample that is the bilingual or code-switching sentence pair, while Target $Y_{1:N-1}$ denotes $N-1$ negative samples selected from the same batch for contrastive learning. The blue modules are frozen and the red modules are trainable during their own stage. This enhanced flexibility makes it well-suited for real-world scenarios. Our scenario is to efficiently improve performance for some particular translation directions without compromising previous well-performing translations. To achieve this, we propose a two-stage method to alleviate the issues of continual adaptation in particular translation directions at different stages, as shown in Figure Multilingual translation models can provide high-quality translation services on many language pairs and are trained on selecting available parallel data with multiple language pairs. Given the anticipated growth in available data, it becomes possible to continually update the multilingual models for both their original and newly emerging translation directions, as illustrated in Figure Formally, the training process of an MNMT model commences with an initial set of available parallel data denoted as $D = \{D_1, \ldots, D_i, \ldots, D_M\}$, encompassing a total of M languages. Each $D_i$ represents the training corpus for the i-th language pair. In this framework, the primary MNMT model, given an input sentence x, undergoes optimization by maximizing the log-likelihood $\mathcal{L}$ for the ground-truth sequence y. This is mathematically expressed as: $\mathcal{L}(\theta) = \sum_{(x, y) \in D} \log p(y \mid x; \theta)$. Here, θ represents the parameters of the MNMT model. For language identification, a specific language token is prepended to the beginning of both source and target sentences, following the convention introduced in prior work We aim to achieve supervised translation directions between high-resource languages ($L_s$ and $L_t$) and zero-shot translation from M languages $L_1, L_2, \ldots, L_M$ to the target language $L_t$ with the help of only the newly available data $D' = \{(x, y)\}$. Due to the unavailability of the original collection of parallel data, the optimization objective in continual learning is given by: $\mathcal{L}(\theta) = \sum_{(x, y) \in D'} \log p(y \mid x; \theta)$. As a result, we aspire to continually update MNMT models for both supervised and zero-shot translations using limited new data. One of the crucial steps in this task involves acquiring new knowledge from newly available data. Previous studies show that the FFN layers can be seen as key-value memories and store knowledge in this manner As shown in Figure The learning module is optimized by minimizing a cross-entropy loss of the parallel sequence pairs: $\mathcal{L}_{CE} = -\sum_{j=1}^{J} \log p(y_j \mid y_{<j}, x)$, where J is the length of the target sentence.
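As a rough picture of the learning stage described above, the hedged PyTorch-style sketch below adds a small bottleneck module alongside a frozen FFN and sums their outputs; the bottleneck width, the exact residual wiring, and the module names are our assumptions rather than the released implementation.

```python
import torch
import torch.nn as nn

class LearningModule(nn.Module):
    """Pluggable bottleneck FFN trained on new data while the original model stays frozen."""
    def __init__(self, d_model: int, d_hidden: int = 1024):
        super().__init__()
        self.down = nn.Linear(d_model, d_hidden)
        self.up = nn.Linear(d_hidden, d_model)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.up(self.act(self.down(x)))

def ffn_with_learning_module(x, frozen_ffn, learning_module):
    """Sum the frozen FFN output and the trainable module's output in the adapted layer."""
    with torch.no_grad():
        base = frozen_ffn(x)
    return base + learning_module(x)

# During the learning stage, only the learning module receives gradients; training
# minimizes the token-level cross-entropy above on the new parallel data.
```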
Our scenario also focuses on facilitating the ability of the original MNMT model to translate into a target language, achieving zero-shot translation. We argue that during the learning stage for continual adaptation, the additional learning module in the encoder lacks to align parallel sentences of supervision that can bridge the representation gap across different languages. To this end, we introduce a contrastive learning loss as an auxiliary supervision to learn language-agnostic representations for the encoder. Given a sentence pair (x, y) from new training data D ′ , we denote it as a positive example and select a set of target sentences i=1 from the same batch as negative examples. The contrastive loss is given by: (5) where sim is the cosine similarity function and τ is the temperature parameter which is set to 0.1. Due to introduced parameters in the learning stage, the architecture of original models is modified. Not only the model increases the total number of parameters, but also it requires to determine the specific task to which the input sentence belongs. Therefore, we aim to compress the model from the previous stage to the same size of original models, preserving the model architecture without introducing parameters. By employing a knowledge consolidation approach, we integrated the knowledge from the learning module into the original models. In the consolidation stage, we assume the two separated modules (i.e., original FFN and previous learning module) as a cooperative relationship, where each module is beneficial for translating into different target languages. In the form of distillation, knowledge can be transferred to the distilled model through the distribution where θ f and θ l represent the parameters of original FFN and learning modules, respectively. In addition to retrospecting the original knowledge, we also intervene in the stage of consolidation to make it relatively smooth. Given the model parameters θ and the model distribution p(x|θ), it is natural to maximize the likelihood function to optimize θ. To evaluate the optimization for θ, we define a score function s(θ) = ∇ θ log p(x|θ) and its measure of uncertainty that is regarded as the covariance of the models, representing the degree of correlation between two arbitrary variables that change together: In this task, we can approximate the expectation in F using empirical distribution q(x), which is given by the parallel training data: The role of F is a measure of curvature of the optimization and has a connection to our L CKD . This gives rise to natural gradient loss L CKD-F with the information matrix F which can define the local curvature in distribution space: where λ is a hyper-parameter to balance the original parameters and learning modules. θ * f represents the original parameters of FFN with computing the information matrix. Thus we can optimize the original FFN in a smooth region, preserving the previously acquired knowledge. Note that we utilize a small-scale set corresponding to the previous task to calculate the F matrix. Parallel Data In this work, we focus on continual adaptation for MNMT models, aiming to enable the model to improve performance of supervised and zero-shot translation using only new data. 
To ensure the reliability and reproducibility of the experiments, we provide the German-Chinese (dezh) bilingual data considered for continual adaptation, as the newly available training corpus Model Configuration In our scenarios, we have chosen to employ the mBART50-nn Baselines We compare our proposed LCCA with various representative methods in continual learning and transfer learning for continual adaptation. The baselines can be listed as follows: • Scratch • mBART50-nn • Fine-Tuning • Mixed-FT • EWC • LFR • LoRA • Prefix-Tuning (Li and Liang, 2021): prepending prefixes to the keys and values at every self-attention layer. • Adapter We evaluate the performance of all translation directions using the FLo-Res testsets The training time of each method is reported in kiloseconds. All models are implemented using the open-source toolkit fairseq For more detailed information, please refer to Appendix B. As presented in Table As shown in Figure As indicated in Table As shown in Table As shown in Table As shown in Figure To investigate the zero-shot translations for continual adaptation, we present a visualization of multilingual representations. It studies the shared representations for MNMT models, which can observe their semantic equivalents among multiple languages We further investigate the training cost compared with the stronger baselines. The results show that LCCA can improve the efficiency to continually adapt original models to new languages. LCCA can In this work, we propose a two-stage method of learning and consolidation to improve performance of pre-trained MNMT models when new data arrives in both supervised and zero-shot translations. It extends multilingual language-agnostic space with the contrastive learning for new data adaptation in the learning stage, and then adopts a collaborative distillation with an information matrix to consolidate both previously and newly learned knowledge for module integration. As a result, our proposed method, LCCA, achieves continual adaptation to multiple translation directions while keeping the model architecture intact. Experimental results demonstrate that the learning stage encourages original models to learn new knowledge from updated parallel data and the consolidation stage mitigates the performance degradation on previous well-performing translation directions when the model compresses. Further analyses reveal that our method effectively captures linguistic features and bridges the gap of shared representation space for comprehensive continual adaptation. In this work, we aspire to continually improve performance of MNMT models in translation directions where they are initially weak, and alleviate the issue of degeneration in the well-performing translation directions. In addition to the advantages mentioned, our method does have certain limitations as follows: (1) Limited available data: We only utilize the parallel data on one language pair for continual adaptation in this work. There are many diverse datasets available, including monolingual data and parallel data on extremely low-resource target language. (2) Sequential adaptation: This work only considers adapting the original models to new data continually once. However, multiple parallel data are available in a sequential manner. 
Due to the uncertainty about potential data exposure from the newly available data to the test sets, we plan to carefully design and explore this scenario in future In this work, we utilize parts of the FLoRes devtest as our test set, which covers 50 languages and follows the policy in a multi-source setting. The test set contains 1012 sentences in each language and is divided into different language groups with linguistic diversity for evaluation. It is worth noting that a language family refers to a collection of languages that share a common ancestral language, often referred to as the proto-language 5 . Variations in grammar and word order can be observed across different language families 6 . And performing cross-lingual transfer and zero-shot translation between languages with significant differences is challenging. The FLoRes follows the CC-BY-SA 4.0 license that can be freely used for research purposes As depicted in Figure To further underscore the efficiency of LCCA, we examine the training time in comparison to the more robust baseline models, as depicted in Figure 6. Although our method has two stages, the results show that training time of LCCA is close to that of the parameter-efficient methods and shorter than the other methods with fully tuning, which is more efficient and practical for continual adaptation. | 1,025 | 2,424 | 1,025 |
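For readers who want the learning-stage contrastive objective (Eq. 5, with cosine similarity, in-batch negatives, and temperature τ = 0.1) in code form, here is a minimal PyTorch-style sketch; the pooling of encoder states into sentence vectors and the tensor shapes are our assumptions.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(src_repr: torch.Tensor, tgt_repr: torch.Tensor, tau: float = 0.1):
    """src_repr, tgt_repr: (batch, d_model) pooled encoder representations of paired
    source/target sentences. Each source is pulled toward its own target; the other
    targets in the batch act as the N-1 negatives."""
    src = F.normalize(src_repr, dim=-1)
    tgt = F.normalize(tgt_repr, dim=-1)
    logits = src @ tgt.t() / tau                      # cosine similarity / temperature
    labels = torch.arange(src.size(0), device=src.device)
    return F.cross_entropy(logits, labels)            # positives lie on the diagonal
```

In the paper this term is combined with the cross-entropy loss during the learning stage, before the consolidation stage distills the learning module back into the original FFN under the information-matrix penalty.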
A Simple Recipe towards Reducing Hallucination in Neural Surface Realisation | Recent neural language generation systems often hallucinate contents (i.e., producing irrelevant or contradicted facts), especially when trained on loosely corresponding pairs of the input structure and text. To mitigate this issue, we propose to integrate a language understanding module for data refinement with selftraining iterations to effectively induce strong equivalence between the input data and the paired text. Experiments on the E2E challenge dataset show that our proposed framework can reduce more than 50% relative unaligned noise from the original data-text pairs. A vanilla sequence-to-sequence neural NLG model trained on the refined data has improved on content correctness compared with the current state-of-the-art ensemble generator. * Contribution during internship at Microsoft. | Neural models for natural language generation (NLG) based on the encoder-decoder framework have become quite popular recently Given that similar issues have been less reported or noticed in the latest neural machine translation systems, we believe that the origin of the issue for neural NLG comes from the data side. Current datasets used for training neural NLG systems often include instances that do not contain the same amount of information from the input structure and the output text originally intended for surface realisation ("how to say") without focusing on content selection ("what to say"). Table Previous work attempted at injecting indirect semantic control over the encoder-decoder architecture In this paper, we propose a simple, automatic recipe towards reducing hallucination for neural surface realisers by enhancing the semantic equivalence between pairs of MRs and utterances. The steps include: (1) Build a language understanding module (ideally well-calibrated) that tries to parse the MR from an utterance; (2) Use it to reconstruct the correct attribute values revealed in the reference texts; (3) With proper confidence thresh-olding, conduct self-training to iteratively recover data pairs with identical or equivalent semantics. Experiments on the E2E challenge benchmark | Our proposed framework consists of a neural natural language understanding (NLU) module with iterative data refinement to induce semantically equivalent MR-text pairs from a dataset containing a moderate level of noise. Formally, given a corpus with paired meaning representations and text descriptions {(R, X)} N i=1 , the input MR R = (r 1 , . . . , r M ) is a set of slotvalue pairs r j = (s j , v j ), where each r j contains a slot s j (e.g., rating) and a value v j (e.g., 5 out of 5). The corpus has M pre-defined slots , and each slot s j has K j unique categorical values v j ∈ (c j,1 , . . . , c j,K j ). The corresponding utterance X = (x 1 , . . . , x T ) is a sequence of words describing the MR. As shown in Figure Self-Attentive Encoder. The encoder produces the vector representations of slot-value pairs in MR and its paired utterance. A slot-value pair r can be treated as a short sequence W = (w 1 , . . . , w n ) by concatenating words in its slot and value. The word sequence W is first represented as a sequence of word embedding vectors (v 1 , . . . , v n ) from a pre-trained embedding matrix E, and then passed through a bidirectional LSTM layer to yield the contextualized representations U sv = (u sv 1 , . . . , u sv n ). 
To produce a summary context vector for U sv , we adopt the same selfattention structure in The Golden Palace Scoring Utterance 𝑋 representations Attentive Scorer. The scorer calculates the semantic similarity between a slot-value pair r (e.g., Price=Cheap) and the utterance X (e.g., reference in Table (1) Model Inference. Each utterance X will be parsed to an MR R e = (r e 1 , . . . , r e M ), with each slot-value pair r e j = (s j , v j ) determined by selecting the candidate value v j with the maximum semantic similarity for each slot s j : where c j,k denotes the kth categorical value for jth slot. Since an utterance may not describe any information about a specific slot s, we add a NONE value as a candidate value of each slot. Model Training. The NLU model is optimized by minimizing the cross-entropy loss: where θ denotes model parameters, and r i,j denotes the jth slot-value pair in the ith training MR. The performance of NLU can be inaccurate when trained on noisy data-text pairs. However, models trained on data with a moderate level of noise could still be well-calibrated. This could enable an iterative relabeling procedure, where we only take MRs produced by NLU with high confidence together with their utterances as new training MRtext pairs to bootstrap the NLU training. Algorithm 1 describes the training procedure. We first pre-train the NLU model using the original data-text pairs for N pre iterations. Then the NLU model parses relevant MR for every utterance in training data, which can be used as new training examples (Line 4). However, due to the inaccuracy of the NLU results, we only use a small portion (φ is set to 40% on validation) with high confidence. Moreover, as each MR consists of up to M slots with some of them being unreliable, we filter the slot-value pairs with slot probability below average according to slot confidence (Line 8 -14). Finally, the NLU model is fine-tuned with the new training corpus D e . This process is repeated for N tune epochs. The final NLU model is leveraged to parse all utterances in the training corpus. The resulting MRs paired with original utterances form the refined training corpus for NLG. Dataset. Our experiments are conducted on E2E challenge MR confid. Sort {(R e , X)} N 1 by MR confidence in reverse order 8: Remove r e i,j from R e i 12: end if 13: end for 14: end for 16: Update θ with Eq. 3 on D e 17: end for of MR-text pairs, and M is the number of wrong MR-text pairs which contain missing or conflict slots in the realization given its input MR. BLEU-4 Human Evaluation. We randomly sample 100 data-text pairs from test set and ask three crowd workers to manually annotate missed (M), added (A), and contradicted (C) slot values in NLG outputs with respect to the input MR, or exact match (E) if all slot values have been realized in the given utterance which contains no additional hallucinated information. When evaluating the NLU systems, missed and added slots refer to the opposite directions, respectively. • TGen Implementation Details. For all models, we use fixed pre-trained GloVe vectors NLU Results. One challenge in E2E dataset is the need to account for the noise in the corpus as some of the MR-text pairs are not semantically equivalent due to the data collection process Table NLG Results. Slug2Slug. Seq2Seq+aug+iter largely improves the content correctness over the baseline Seq2Seq with 67.3% error reduction. 
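For concreteness, the confidence-based refinement loop outlined above (Algorithm 1) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes an nlu object exposing score(slot, value, text) and finetune(pairs), both hypothetical names, uses the sum of slot confidences as a proxy for MR confidence, and omits the pre-training epochs.

# Minimal sketch of confidence-based iterative data refinement, assuming a
# hypothetical `nlu` object with score(slot, value, text) and finetune(pairs).
def parse_mr(nlu, text, candidate_values):
    """Pick, for every slot, the candidate value (including NONE) with the highest score."""
    mr, confidences = {}, {}
    for slot, values in candidate_values.items():
        scored = [(nlu.score(slot, v, text), v) for v in values + ["NONE"]]
        confidences[slot], mr[slot] = max(scored)
    return mr, confidences

def refine(nlu, utterances, candidate_values, phi=0.4, n_tune=3):
    for _ in range(n_tune):
        parsed = [(text,) + parse_mr(nlu, text, candidate_values) for text in utterances]
        # Keep only the phi fraction of parses with the highest overall confidence.
        parsed.sort(key=lambda item: sum(item[2].values()), reverse=True)
        kept = parsed[: int(phi * len(parsed))]
        new_pairs = []
        for text, mr, conf in kept:
            avg = sum(conf.values()) / len(conf)
            # Drop slot-value pairs whose confidence falls below the average.
            trimmed = {s: v for s, v in mr.items() if conf[s] >= avg}
            new_pairs.append((trimmed, text))
        nlu.finetune(new_pairs)
    # Final pass: re-parse every utterance with the refined NLU model.
    return [(parse_mr(nlu, text, candidate_values)[0], text) for text in utterances]

The refined (MR, text) pairs returned by the final pass would then serve as the training corpus for the surface realiser.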
Besides, we also replace our NLU module with the rule-based aligner crafted in prior work. [Table omitted: qualitative example outputs describing The Mill, a family- or children-friendly, high-priced pub near Café Sicilia in the riverside area, contrasted with an original MR such as name[The Phoenix], eatType[pub]; the examples illustrate missed, added, and contradicted errors.] Our method can complement and correct the original MR with additional slot values described in the paired texts, which effectively alleviates generating contradicted facts. However, due to the imperfection of the NLU model, our method may ignore some slot values realized in the utterances and produce additional errors. [Table omitted: an example of the refined data.] In this paper, we present a simple recipe to reduce the hallucination problem in neural language generation: introducing a language understanding module to implement confidence-based iterative data refinement. We find that our proposed method can effectively reduce the noise in the original MR-text pairs from the E2E dataset and improve the content coverage for standard neural surface realisation (with no focus on content selection). However, the currently presented approach still has two clear limitations. One is that this simple approach is implicitly built on the assumption of a moderate level of noise in the original data, which makes it possible to bootstrap a well-calibrated NLU module; we are still working towards solutions for cases with heavy noise. The other limitation of this preliminary work is that it currently overlooks the challenges of lexical choices for quantities, degrees, temporal expressions, etc., which are rather difficult to learn merely from data and would require additional commonsense knowledge. An example case is given in the corresponding table. | 803 | 1,302 | 803 |
Seen to Unseen: Exploring Compositional Generalization of Multi-Attribute Controllable Dialogue Generation | Existing controllable dialogue generation work focuses on the single-attribute control and lacks generalization capability to out-of-distribution multiple attribute combinations. In this paper, we explore the compositional generalization for multi-attribute controllable dialogue generation where a model can learn from seen attribute values and generalize to unseen combinations. We propose a prompt-based disentangled controllable dialogue generation model, DCG. It learns attribute concept composition by generating attribute-oriented prompt vectors and uses a disentanglement loss to disentangle different attributes for better generalization. Besides, we design a unified reference-free evaluation framework for multiple attributes with different levels of granularities. Experiment results on two benchmarks prove the effectiveness of our method and the evaluation metric. | Recently, large pre-trained language models (PLMs) like DialoGPT unseen multi-attribute combination seen multi-attribute combination (b) A-ACC Figure Although these methods have made some progress in CDG, most of them focus on singleattribute generation where there is only one attribute label like happiness in emotion and pay less attention to the multi-attribute generation, which is a more practical setting. Therefore, we are committed to filling this gap in CDG. Noted that different from single-attribute, the control signal of the multiattribute generation is a combination of multiple values from different attributes, which faces the challenge of lacking sufficient annotated attributespecific data. We also find state-of-the-art methods for multi-attribute controllable text generation In this paper, we try to explore the compositional generalization for multi-attribute controllable dialogue generation where a model could learn from seen attribute values and generalize to unseen combinations. Figure Furthermore, to unify the evaluation of different granularity attributes, we design a novel and general reference-free evaluation framework, i.e. Multiple Attribute Evaluation (MAE), to measure the consistency between desired seen/unseen attribute combinations and generated responses. Specifically, the evaluation of each attribute is converted to a text-to-text generation task based on T5 Our contributions are as follows: (1) To the best of our knowledge, we are the first to explore the compositional generalization for multi-attribute controllable dialogue generation and find existing models lack generalization capability to outof-distribution multi-attribute combinations. (2) We propose a disentangled controllable generation, DCG, which learns attribute concepts from seen values to unseen combinations via a shared mapping of attribute-oriented prompts and uses a disentanglement loss to disentangle different attribute combinations. (3) We introduce a unified reference-free evaluation framework, MAE, for different granularities of attributes. Two benchmarks are established and sufficient experiment results prove the effectiveness of our method and evaluation metric. 
| Controllable Dialogue Generation Currently, there have existed many studies on CDG As shown in Figure To better use the control signals, we design two types of prompts to elicit the attribute-related information from the PLM: Attribute-oriented Prompt We use the combination of controlled attribute values corresponding to each instance as prompts to guide the model to focus on the controlled information in the dialogue. Here, the controlled attribute values are discrete attribute labels in DailyDialog or continuous attribute descriptions in ConvAI2. The multiple attribute values a i,• in the corresponding combination c are simply concatenated as an attribute-oriented prompt sequence, i.e., p att = [a 1 , b 2 , ...]. We encode the prompt tokens using the word embedding layer of a pre-trained DialogGPT and then employ a shared MLP θ 1 to generate the embeddings E att of the attribute-oriented prompts. Note that we don't require independent parameters for each attribute value like Finally, we concatenate the two prompt embeddings as the whole prompt embeddings, i.e., E p = [E att ; E task ]. Given an instance (d, c), d is the dialogue history and c is the combination of controllable attribute values. To force the model to distinguish different combinations of multiple attribute values, we design some pseudo combinations to enhance the diversity of the prompts, which improves the generalization ability of our model. A disentanglement loss L D is further introduced to disentangle the combination representations and train multiple compositional prompts simultaneously: (1) where C pse is the set of pseudo combinations and at least one value in the combination c ′ is different from the corresponding value in the golden combination. We use DialoGPT where T is the length of generated sequence, i.e., the dialogue history and response. φ is the parameter of the PLM and is fixed. The parameters of two prompts, θ 1 and θ 2 , are the only updated parameters. Therefore, the training loss L is the weighted sum of the disentanglement loss and the PLM loss: When the training is completed, we save all parameters of the prompt module. During the inference, the data from the test set is mapped to the representations of prompts only via the embedding matrices, where the features of the attributes seen in the training set can be transferred to the unseen combinations. To fill the gap in metrics for multi-attribute controllable dialogue generation, we propose a unified and efficient evaluation framework without additional large-scale labeled data, as shown in Figure Specifically, the continuous prompt sequence is prepended to the response as a prefix, which makes up the input of the encoder. Another continuous prompt sequence, the attribute values, and the template are concatenated and fed to the decoder. We take the probability of generating "yes" corresponding to In training process, only embeddings of continuous prompts are updated and the parameters of T5 are fixed. Note that our model-based evaluation ap- proach gets rid of the reliance on golden response when tested and can be uniformly applied to various granularities of attributes. 6 Experiments We construct two datasets based on DailyDialog DailyDialog is an open-domain dialogue dataset with two controllable attributes: emotion and act. Here, we treat the labels of the two attributes as an attribute combination, e.g., (surprise, inform). 
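As a rough illustration of the MAE evaluation described above, the sketch below scores the probability of "yes" with a frozen T5 for a (response, attribute value) pair. It simplifies the described setup: the trainable continuous prompts on the encoder and decoder sides are omitted, and the textual template is an assumption, since the exact template is not reproduced in this excerpt.

# Minimal MAE-style attribute check with a frozen T5; the template wording is
# an assumption and the trainable continuous prompts are left out.
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tok = T5Tokenizer.from_pretrained("t5-base")
t5 = T5ForConditionalGeneration.from_pretrained("t5-base").eval()

@torch.no_grad()
def mae_score(response, attribute_value):
    enc = tok("response: " + response + " attribute: " + attribute_value +
              " Does the response satisfy the attribute?", return_tensors="pt")
    # Start decoding from the decoder start token and read off P("yes") at step 1.
    decoder_input_ids = torch.tensor([[t5.config.decoder_start_token_id]])
    logits = t5(**enc, decoder_input_ids=decoder_input_ids).logits[0, -1]
    probs = torch.softmax(logits, dim=-1)
    yes_id = tok("yes", add_special_tokens=False).input_ids[0]
    return probs[yes_id].item()

A per-attribute score can then be averaged over the test set to obtain a single MAE value for that attribute.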
For dialogues, each utterance with two attribute labels is regarded as the response and all preceding texts of this utterance are considered as the corresponding dialogue history. In this way, we get 14,879 examples. We count the attribute combinations labeled in all examples, 18 of which are selected as C v,train and the other 6 are C v,test . Then, the examples are divided into the training set and test set according to the combination set. We also extract 10% samples from the training set as the validation set. ConvAI2-CG ConvAI2 is a persona-based dialogue dataset in which the persona profile of each dialogue is consisting of 4 or 5 personalized sentences. We treat each sentence as an attribute value and the sentences in the same position belong to the same attribute. The persona profile is regarded as an attribute combination, e.g., ("My mom is my best friend.", "I've four sisters.", "I believe that mermaids are real.", "I love iced tea."). For each dialogue, we choose the first 4 utterances as the dialogue history and the 5th utterance as the response. Consistent with the processing method of DailyDialog-CG, we select 11,566 combinations as C v,train After that, we obtain the corresponding training set, validation set, and test set. The statistics about the two datasets are shown in Table We compare our methods with several competitive baselines. The common dialogue generation models are included: (1) DialoGPT-Ori In this work, we focus on evaluating the attribute controllability and text quality for different controllable generation methods. Attribute Controllability It aims to evaluate whether the method can generate responses constrained by multiple attributes successfully. 1. For the control of coarse-grained discrete attributes in DailyDialog-CG, we use the classification accuracy, i.e., E-ACC and A-ACC, for each attribute computed by an independently trained Roberta classifier 2. For the control of fine-grained continuous attributes in ConvAI2-CG, we calculate the cosine similarity between the representations of attribute sentences and the generated response, i.e., P-SIM 3. We propose a unified model-based evaluation metric, i.e., MAE, for various granularities of attributes, the details can be seen in Section 5. Text Quality We use the BLEUs Results on DailyDialog-CG Table Besides, we also concern whether DCG benefits from attribute-oriented prompt, task-oriented prompt, and disentanglement learning. We find that DCG w/o AOP is the same with Prompt-tuning and it performs poorly in attribute controllability, which shows attribute-oriented prompt plays an important role in guiding the model to focus on the controlled information. After removing the taskoriented prompt, the DCG w/o TOP decreases to 19.18%, 6.74%, and 15.63% on text quality, but still maintains high controllability. It proves taskoriented prompt helps improve text quality. We also conduct experiments to prove that TOP can improve text quality when combined with other methods. (See Appendix H). Besides, after removing disentanglement learning, the DCG w/o DL drops significantly, which shows disentanglement learning effectively disentangles attribute combinations and improves the ability of compositional generalization. 
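Since the ablation above shows how much the disentanglement loss matters, a sketch of one possible form of it may help. The exact expression of Eq. (1) is not reproduced in this excerpt, so the code below is an assumption rather than the authors' formulation: a margin-based contrastive loss that asks the response to be more likely (lower NLL) under the golden attribute combination than under each pseudo combination, combined with the generation loss as a weighted sum in the spirit of Eq. (3). The function response_nll, the margin, and the weight lambda_d are illustrative.

# Assumed margin-based form of the disentanglement loss over pseudo combinations.
import torch

def disentanglement_loss(response_nll, dialogue, response, golden_combo,
                         pseudo_combos, margin=1.0):
    """response_nll(dialogue, response, combo) -> scalar NLL of the response
    under the prompts built from the given attribute combination."""
    gold = response_nll(dialogue, response, golden_combo)
    losses = []
    for pseudo in pseudo_combos:
        neg = response_nll(dialogue, response, pseudo)
        # The golden combination should yield a lower NLL than any pseudo one.
        losses.append(torch.clamp(gold - neg + margin, min=0.0))
    return torch.stack(losses).mean()

def total_loss(gen_loss, dis_loss, lambda_d=0.5):
    # Weighted sum of the generation loss and the disentanglement loss.
    return gen_loss + lambda_d * dis_loss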
Results on ConvAI2-CG Table 7 Qualitative Analysis Figure Following Guan and Huang (2020), we adopt Pearson (r), Spearman (ρ), and Kendall (τ ) correlation coefficients between our proposed automatic metric, MAE, and human judgments (details can be seen in Appendix D) to measure the quality of different metrics. Table To show the effect of prompts for compositional generalization, we display a visualization of the concatenated prompt embeddings of two attributes via PCA (Jolliffe and Cadima, 2016) on DailyDialog-CG in Figure To size of training data decreases, the performance of both CTRL and DCG presents a dropping trend and our DCG model is consistently better than CTRL, which confirms our model has a strong capability for multi-attribute controllable dialogue generation. Figure The CTRL only controls "like to skate", while our DCG controls "like to write poetry and skate", which is highly consistent with the golden response. Compared with previous models, our model addresses many difficult issues in compositional generalization for multi-attribute controllable dialogue generation. With an attribute-oriented prompt and a task-oriented prompt, our method learns attribute concepts from seen attribute values to unseen attribute combinations. Through a disentanglement learning, some artificial-constructed unseen pseudo combinations are injected into the training process, which greatly improves the generalization ability of our model. In this paper, we study the compositional generalization for multi-attribute controllable dialogue generation. We propose a prompt-based disentangled controllable dialogue generation model which generates attribute-specific prompt vectors from control codes and uses a disentanglement loss to disentangle different attributes. Further, we develop a unified reference-free evaluation framework, MAE, for multi-attribute generation with different levels of granularities. Experiments and analysis show our method achieves better text quality and controllability scores. Moreover, our proposed MAE has a higher correlation with human judgments for evaluation on CDG. Although DCG achieves significant improvements compared with existing baselines, there are still avenues to be explored in future research. (1) DCG in this paper focuses on the compositional generalization for multi-attribute on controllable dialogue generation. We hope to extend the method to other generative tasks, including but not limited to dialogue summarization and story generation. (2) In this paper, we explored the control of coarsegrained discrete attributes and the control of finegrained ones separately, and we intend to study the combination of these two attributes in future research. Controllable dialogue generation(CDG) is an essential task in Natural Language Processing (NLP) and has been widely studied for decades, which aims to guide dialogue generation toward the desired attributes such as emotions, acts, and personas. In the open-domain dialogue scenario, CDG can generate emotional and diverse responses to enhance the user's sense of participation. In the task-oriented dialogue scenario, CDG can generate responses that meet the user's needs according to the user's intent. However, most previous works focus on singleattribute generation where there is only one attribute label like happiness in emotion and pay less attention to the multi-attribute generation, which is a more practical setting. 
Different from singleattribute, the control signal of the multi-attribute generation is a combination of multiple values from different attributes, which faces the challenge of lacking sufficient annotated attribute-specific data. Therefore, we explore the compositional generalization for multi-attribute controllable dialogue generation where a model could learn from seen attribute values and generalize to unseen combinations. We also design a novel and general referencefree evaluation framework to unify the evaluation of different granularity attributes. The experimental results prove the effectiveness of our model and evaluation framework. Besides, there is no huge biased content in the datasets and the models. If the knowledge base is further used, the biased content will be brought into the generated responses, just like biased content posted by content creators on the Web which is promoted by a search engine. To prevent the technology from being abused for disinformation, we look forward to more research effort being paid to fake/biased/offensive content detection and encourage developers to carefully choose the proper dataset and content to build the knowledge base. multi-attribute control codes with dialogue history to fine-tune the DialoGPT. CoCon: Proposed by Prompt-tuning: Proposed by We fine-tune multi-attribute prompts for dialogue generation. Note that CatPrompt is only applied to coarse-grained discrete attributes like emotion and act instead of persona. Because persona has a large value set, resulting in numerous parameters (see Table Our implementation is based on the Hugging Face Transformer models We compare the average inference efficiency of our methods with the baselines. As we can observe from means that the generated response is completely inconsistent with the expected attribute label, score 2 denotes that the generated response has the same meaning as the expected attribute label, but no explicit attribute-related words, and score 3 means that the generated response contains some clear attribute words. For the text quality, we ask the annotators to evaluate the fluency and context relevancy of the generated responses on a scale of 1-5, where a higher score indicates better quality. The inter-annotator agreement on the controllability and text quality is 0.63 and 0.61 for DailyDialog-GC, and 0.58 and 0.60 for ConvAI2-CG. For all metrics, the average score of the 5 annotators is treated as the final score. As shown in Table Prompt Length Figure Automatic evaluation metrics are important for text generation tasks, including reference-based like BLEU To prove our model still be useful when the number of attributes varies from training to inference, we train CTRL and our DCG with 4 attributes and inference with 5 attributes in ConvAI2-CG. As shown in Table We prove that task-oriented prompts (TOP) can also improve text quality when combined with other methods. Specifically, we trained CTRL with TOP in our experiments. As Table | 878 | 2,198 | 878 |
Application-Agnostic Language Modeling for On-Device ASR | On-device automatic speech recognition systems face several challenges compared to server-based systems. They have to meet stricter constraints in terms of speed, disk size and memory while maintaining the same accuracy. Often they have to serve several applications with different distributions at once, such as communicating with a virtual assistant and speech-to-text. The simplest solution to serve multiple applications is to build application-specific (language) models, but this leads to an increase in memory. Therefore, we explore different data-and architecture-driven language modeling approaches to build a single application-agnostic model. We propose two novel feed-forward architectures that find an optimal trade off between different on-device constraints. In comparison to the applicationspecific solution, one of our novel approaches reduces the disk size by half, while maintaining speed and accuracy of the original model. | On-device Automatic Speech Recognition (ASR) is subject to several constraints: it should return accurate results in a reasonable time frame without consuming too much memory and disk space. State-of-the-art research often is accuracy focused, while resource-constrained applications also need to take care of performance and size. Finding an architecture that reaches all constraints is not trivial. Another challenge is that ASR systems often serve a large variety of requests. ASR systems can serve an on-device Virtual Assistant (VA) but also allow dictated messages, notes, e-mails, etc.we refer to the latter application as Speech-to-text (STT). Typical VA requests are knowledge-driven questions such as "how old is Barack Obama?" or commands, e.g. "play some Lady Gaga music". STT requests are longer and of a different nature than typical VA requests. The solution that yields the best accuracy for both VA and STT is to train separate models for each application, but additional model size is prohibitive. We aim to develop a single model instead. In this paper, we focus on a Neural Network Language Model (NNLM) in the ASR system. Our baseline is a Fixed-size Ordinally-Forgetting Encoding (FOFE) feed-forward NNLM To build a single Application-Agnostic (AA) NNLM, we developed a method to optimally sample training data. We sample data from different sources, e.g. anonymized and randomly sampled user requests from opted-in users for VA and STT and artificial requests spanning many different domains that focus on improving the tail of the distribution. The data-driven approach tries to find the optimal balance between the application-specific data sources by creating a balanced development set and distributing the sampling weights based on the importance of each data source and each application on that development set. Training a single FOFE NNLM on the combined dataset can lead to accuracy degradations, even with a larger model or longer training. We explore two extensions to the baseline FOFE NNLM: firstly, a Mixture FOFE NNLM The contributions of this paper are as follows: • We propose a method to optimally combine application-specific data sources to train an application-agnostic LM in Section 3. • We propose two novel FOFE-based neural LMs in Section 4 that each match the accuracy of two application-specific language models. • In Section 6 we compare the novel NNLMs accuracy and speed against the baseline FOFE and state-of-art Transformer models. 
We do this for three different languages -US English, German and Mandarin Chinese -and three types of test sets (see Section 5 for more information). | We start by discussing related work on modeling several domains/tasks at once. Many pattern recognition tasks are imbalanced since data from different categories do not occur at the same frequency. Therefore, the less frequent categories are not well represented in the training data The choice of architecture for language modeling has also been a recurrent topic of research. Early neural LMs use feed-forward layers Recent extensions of the feed-forward architecture have been proposed that alleviate different disadvantages. The ASR system in this paper serves two applications, VA and STT, for which we observe very different linguistic patterns. To demonstrate these differences, we calculate statistics on two English development sets. Each data set contains 23k anonymized queries and is randomly sampled from real user data similarly to the test sets described in Section 5. and "wake1" refer to "<wakeword_2> and "<wake-word_1>. VA requests are typically shorter than STT requests. In Figure Secondly, the content and style of the requests varies between the two applications. Figure Because of the different linguistic nature of these two applications, balancing the NNLM training data has a large impact on the quality of the model. A common strategy to determine NNLM sampling weights for each application is to train individual n-gram LMs on each data source and choose relevance weights based on the optimal linear interpolation weights on a development set We propose a balancing scheme to derive sampling weights for I text sources that benefit both applications. We create a balanced development set containing approximately the same amount of VA and STT data. Let α 1 , . . . , α I ∈ [0, 1] be the sampling weights such that I i=1 α i = 1 and ρ(i) ∈ {D, A} indicating if the text source belongs to STT or VA. The redistribution probability masses β D and β A for STT and VA respectively are calculated to serve the joint application. These probability masses are determined by the optimal weights that minimize the perplexity of the linear Application-Specific (AS) language model combination on the balanced development set. The application-specific probability mass allocated by each application can be formalized as: Now consider the ratio between the redistribution and application-specific probability mass: These ratios determine the scaling of the original sampling weights to achieve balancing. Balanced sampling weights are then determined by a re-normalization of the scaled sampling weights: 4 Application-Agnostic and Application-Dependent FOFE NNLMs In this section three different types of NNLM architectures are introduced for on-device ASR. In the following let w N 1 := w 1 , . . . , w N be a word sequence. All NNLM architectures considered here follow a similar scheme. In each architecture a word embedding is followed by a FOFE layer The baseline FOFE NNLM shown in Figure Figure Figure A word-level NNLM holds the majority of parameters in the embedding. Therefore, the disk size for the mixture and AD-NNLM should increase slightly compared to the baseline architecture. Also the AD-NNLM speed should not increase since it is equivalent to the baseline architecture at inference time. 
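The data-balancing scheme described earlier in this section can be written down compactly. The sketch below assumes the redistribution masses beta, obtained from the optimal linear-interpolation weights of the application-specific LMs on the balanced development set, are computed elsewhere; the example numbers at the bottom are purely hypothetical.

# Minimal sketch of the sampling-weight balancing across VA and STT sources.
def balance_sampling_weights(alpha, app_of_source, beta):
    """alpha: per-source sampling weights summing to 1;
    app_of_source: application ("STT" or "VA") of each source;
    beta: redistribution probability mass per application."""
    # Probability mass currently allocated to each application.
    mass = {"STT": 0.0, "VA": 0.0}
    for i, a in enumerate(alpha):
        mass[app_of_source[i]] += a
    # Scaling ratio between the redistribution and application-specific mass.
    ratio = {app: beta[app] / mass[app] for app in mass}
    # Scale each source weight and renormalize so the weights sum to 1 again.
    scaled = [ratio[app_of_source[i]] * a for i, a in enumerate(alpha)]
    total = sum(scaled)
    return [s / total for s in scaled]

# Hypothetical example: two VA sources and one STT source.
alpha = [0.5, 0.3, 0.2]
app_of_source = ["VA", "VA", "STT"]
beta = {"VA": 0.6, "STT": 0.4}   # assumed interpolation masses
print(balance_sampling_weights(alpha, app_of_source, beta))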
The training data of our LMs consists of different data sources: anonymized and randomly sampled user requests from both VA and STT that are manually or automatically transcribed, along with synthetic tail-focused datasets. For the latter, we sample from domain-dependent templates and lists of entities that can fill those slots, both of which are derived from real user data. As mentioned in the introduction, we train NNLMs for three languages: US English, German and Mandarin Chinese. For our NNLMs, we obtain weights according to the method described in Section 3. For the AS models we sample 6B words while for the AA and AD models we sample 12B words. We run Bayesian hyperparameter optimization and select the final values based on optimal size-accuracy trade off. As a result, the models have a slightly different number of parameters, but we show in section 6 that this does not impact results noticeably. All models have 4 feed-forward layers and an embedding size of 256 -we tie the input and output embedding weights to reduce disk size We train our NNLMs with Block Momentum Stochastic Gradient Descent For evaluation, we test on three types of test sets: (1) VA and (2) STT, which consist of user requests sampled according to the distribution that we observe in our VA/STT and thus contain many head queries, and (3) Tail, which is designed to focus on queries with tail entities. Since these do not occur often in our user data, Tail consists of synthetic requests sampled from the same templates and entity lists that generate the synthetic training data. The requests cover a wide variety of domains such as music, sports and home automation and the audio is generated using Text-to-Speech. We evaluate the accuracy of our models using Word Error Rate (WER) and latency using P95 realtime factor (RTF). If y is the duration of the audio signal and x the time it takes to decode y, RTF is defined as x/y. P95 refers to the 95th percentile and thus captures the latency of the most difficult queries. We run each test three times and average the RTF numbers to capture outliers. The ASR system uses a deep convolutional neural network acoustic model (AM) as described in We first evaluate the accuracy of the different neural architectures. Table We first observe that moving from AS to AA FOFE and thus reducing the number of parameters by half gives in some cases 1.5-3.8% WER degradation. Secondly, even though the Transformer architectures have been optimized using Bayesian optimization similar to the FOFE-based models, they give mixed results. For English VA and STT we observe WER improvements while for all other setups we see large degradations. The on VA for all languages, while the AA Mixture FOFE gives the best accuracy on STT, but the differences between the two architectures are small. They outperform the baseline AS/AA FOFE and Transformer models in almost all cases. The only exception are the English and German Tail test sets: the AS FOFE models still achieve the best accuracy, probably because infrequent queries benefit the most from doubling the number of parameters. As explained in Section 5, we choose hyperparameters based on the optimal accuracy-size trade off. As a result, the number of parameters of the models at the top of Table We aim to develop a single NNLM that can serve both VA and STT requests with the same accuracy and speed as application-specific NNLMs, while reducing the disk size approximately by half. 
We develop a method to optimally balance the data of the VA and STT applications, and propose two novel FOFE feed-forward architectures. The Application-Agnostic Mixture FOFE and the Application-Dependent FOFE both outperform the baseline FOFE and Transformer models in terms of accuracy, and the latter is also competitive in terms of latency. | 943 | 2,635 | 943 |
Improving Passage Retrieval with Zero-Shot Question Generation | We propose a simple and effective re-ranking method for improving passage retrieval in open question answering. The re-ranker re-scores retrieved passages with a zero-shot question generation model, which uses a pre-trained language model to compute the probability of the input question conditioned on a retrieved passage. This approach can be applied on top of any retrieval method (e.g. neural or keywordbased), does not require any domain-or taskspecific training (and therefore is expected to generalize better to data distribution shifts), and provides rich cross-attention between query and passage (i.e. it must explain every token in the question). When evaluated on a number of open-domain retrieval datasets, our re-ranker improves strong unsupervised retrieval models by 6%-18% absolute and strong supervised models by up to 12% in terms of top-20 passage retrieval accuracy. We also obtain new stateof-the-art results on full open-domain question answering by simply adding the new re-ranker to existing models with no further changes. 1 | Text retrieval is a core sub-task in many NLP problems, for example, open-domain question answering where a document must be retrieved and then read to answer an input query. Queries and documents are typically embedded in a shared representation space to enable efficient search, before using a task-specific model to perform a deeper, tokenlevel document analysis (e.g. a document reader that selects an answer span). We show that adding a zero-shot re-ranker to the retrieval stage of such models leads to large gains in performance, by doing deep token-level analysis with no task-specific data or tuning. We focus on open-domain question answering and introduce a re-ranker based on zero-shot question generation with a pre-trained language model. Our re-ranker, which we call Unsupervised Passage Re-ranker (UPR), re-scores the retrieved passages by computing the likelihood of the input question conditioned on a retrieved passage. In part, UPR is inspired by the traditional models of query scoring with count-based language models Comprehensive experiments across a wide range of datasets, retrievers, and PLMs highlight the strengths of UPR: • By re-ranking the top-1000 passages from • On the open-domain QA task, just by performing inference with the re-ranked passages and a pretrained reader, we obtain improvements of up to 3 EM points on three benchmark datasets. To the best of our knowledge, this is the first work to show that a fully unsupervised pipeline (consisting of a retriever and re-ranker) can greatly outperform supervised dense retrieval models like DPR | An open-domain QA system consists of a retriever and a reader component. The reader attends to the retrieved passages to produce a final answer to the question. We use the Fusion-in-Decoder (FiD; Izacard and Grave (2021b)) model as the reader. In FiD, each retrieved passage is concatenated with the question and is then passed as an input to the T5 encoder We train the FiD reader using standard negative log-likelihood loss and teacher-forcing to generate an answer autoregressively. To understand the effect of UPR on answer generation, we then do inference with the previously trained reader and the re-ranked passages for each question. Let D = {d 1 , . . . , d M } be a collection of evidence documents. Given a question (q), the retriever selects a subset of relevant passages Z ⊂ D, one or more of which will ideally contain the answer to q. 
Our method will work with passages obtained from any retriever -either based on sparse representations like BM25 or dense representations like DPR. We only assume that the retriever provides the K most relevant passages. We denote this set of top-K passages as Z = {z 1 , . . . , z K }. Given the top-K retrieved passages, the goal of the re-ranker is to reorder them such that a passage with the correct answer is ranked as highly as possible. The ordering is computed with a relevance score p(z i | q) for each passage z i ∈ Z. Our re-ranking approach is unsupervised, i.e., it does not use any task-specific training examples. We refer to it as UPR, for Unsupervised Passage Re-ranking. UPR uses a pre-trained language model to score the probability of generating the question q given the passage text z, as described below. The question generation model is zero-shot, allowing for dataset-independent re-ranking, and also incorporates cross-attention between the question and passage tokens while forcing the model to explain every token in the input question. UPR is, therefore, more expressive than using dense retrievers alone, even if both methods fundamentally build on top of the same (or very similar) pre-trained models. More specifically, we estimate p(z i | q) by computing the likelihood of question generation conditioned on the passage, i.e., the quantity p(q | z i ). This also naturally emerges when applying Bayes' rule to p(z i | q) as log p(z i | q) = log p(q | z i ) + log p(z i ) + c , where p(z i ) is the prior on the retrieved passage and c is a common constant for all z i . As a simplifying assumption, we assume that the passage prior log p(z i ) is uniform, and can be ignored for re-ranking. With this, the above expression reduces to We estimate log p(q | z i ) using a pre-trained language model (PLM) to compute the average loglikelihood of the question tokens conditioned on the passage: where Θ denotes the parameters of the PLM and |q| denotes the number of question tokens. We apply the PLM in a zero-shot fashion with no finetuning by simply appending the natural language instruction "Please write a question based on this passage" to the passage tokens as shown in Figure The initial passage ordering is then sorted based on log p(q | z). This enables us to re-rank the passages by just performing inference using off-theshelf language models avoiding the need to label question-passage pairs for finetuning. Because the question generation model is applied zero-shot, this overall approach can be applied to improve the retrieval accuracy of any test collection, with no dataset-specific models or tuning data. In this section, we describe the datasets, unsupervised and supervised retrievers, and language models used for our passage re-ranking experiments. Following previous work on passage retrieval, we use the popular datasets of SQuAD-Open Evidence Passages D. We use the preprocessed English Wikipedia dump from December 2018 as released by To examine the robustness of UPR to keywordcentric datasets, we experiment with test collections where dense retrievers struggle and when the questions are from different domains. Entity Questions contains 22K short questions about named entities based on facts from Wikipedia. 
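A minimal sketch of the UPR relevance score defined above, i.e. the question log-likelihood averaged over question tokens, conditioned on the passage and the instruction, is given below. The Hugging Face checkpoint name and the exact prompt formatting are assumptions for illustration; the excerpt uses T0/T5-style models with the instruction "Please write a question based on this passage."

# Minimal UPR scoring sketch with an off-the-shelf encoder-decoder PLM.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL = "google/t5-large-lm-adapt"   # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL).eval()

@torch.no_grad()
def upr_score(question, passage):
    """Return (1/|q|) * sum_t log p(q_t | q_<t, passage, instruction)."""
    prompt = "Passage: " + passage + " Please write a question based on this passage."
    enc = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=512)
    labels = tokenizer(question, return_tensors="pt", truncation=True).input_ids
    out = model(input_ids=enc.input_ids, attention_mask=enc.attention_mask, labels=labels)
    # out.loss is the mean cross-entropy over question tokens, i.e. the negative
    # of the average log-likelihood that UPR uses as the relevance score.
    return -out.loss.item()

def rerank(question, passages):
    # Higher average question log-likelihood means a more relevant passage.
    return sorted(passages, key=lambda z: upr_score(question, z), reverse=True)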
Previous work on this dataset has shown that dense retrievers struggle to retrieve relevant passages while sparse approaches like BM25 are more successful BEIR Benchmark is a test suite for benchmarking retrieval algorithms and consists of multiple datasets, where each dataset consists of test set queries, evidence documents, and relevance document annotations In our re-ranking experiments, we retrieve passages from both unsupervised and supervised retrievers, as detailed below. BM25 ranks based on the term-frequency and inverse document frequency of the keywords present in the question and passage MSS is a dense retriever trained by predicting masked salient spans like named entities with the help of a reader network Contriever uses momentum contrastive training to learn dense retrievers from text paragraphs DPR uses annotated question-context paragraphs and hard negative examples to train a supervised dense retriever MSS-DPR further improves DPR performance by first pre-training the dense retriever using MSS followed by DPR-style supervised finetuning We use a range of pre-trained models for computing our re-ranking relevance scores. T5 Series These models consist of encoder and decoder transformers pre-trained by denoising input text sequences. We experiment with the T5 model GPT These consist of a transformer decoder trained with the autoregressive language modeling objective. We use the GPT-neo model with 2.7B parameters We run all the experiments on a cluster with V100-32GB GPUs. We use PyTorch For the dense retriever experiments, we use the base configuration, which consists of 12 attention heads, 12 layers, and 768 model dimensions. To experiment with supervised retrievers, we train DPR and MSS-DPR for 3 epochs on SQuAD-Open, 40 epochs on NQ and TriviaQA, and 20 epochs on WebQ. We evaluate the performance of our proposed Unsupervised Passage Re-ranker (UPR), conduct ablations to better understand the approach, evaluate robustness on challenging test collections, and discuss run-time efficiency. Our goal is to improve the rankings of top-{20, 100} passages. Hence, in the first stage, a larger candidate list is fetched by retrieving the top-1000 passages. Then, in the second stage, these passages are re-ranked with the T0-3B PLM unless specified otherwise. To evaluate UPR performance, we compute the conventional top-K retrieval accuracy metric. It is defined as the fraction of questions for which at least one passage within the top-K passages contains a span that matches the humanannotated answer to the question. We experiment with the four datasets and five retrievers as introduced in §3.1 and §3.3, respectively and perform re-ranking with the T0-3B model. Table 2 reports the top-20 and top-100 retrieval accuracy before and after re-ranking. UPR provides consistent improvements across all the retrievers and datasets, improving unsupervised models by 6%-18% absolute and supervised models by up to 12% in top-20 accuracy. Re-ranked Contriever outperforms DPR by an average of 7% in top-20 and 4% in top-100 when considering all the datasets. This shows that a fully unsupervised pipeline of a retriever and reranker can outperform strong supervised models like DPR. Sparse representations still remain competitive, with BM25 outperforming Contriever and MSS on SQuAD-Open and TriviaQA re-ranking. 
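For reference, the top-K retrieval accuracy reported in these tables can be computed with a small helper like the one below; simple lower-cased string containment stands in for the span-matching step, which is a simplification of the standard answer-matching procedure.

# Top-K retrieval accuracy: fraction of questions with at least one answer-bearing
# passage among the top K.
def top_k_accuracy(ranked_passages, answers, k=20):
    """ranked_passages: per-question list of ranked passage strings;
    answers: per-question list of acceptable answer strings."""
    hits = 0
    for passages, golds in zip(ranked_passages, answers):
        top = [p.lower() for p in passages[:k]]
        if any(g.lower() in p for g in golds for p in top):
            hits += 1
    return hits / len(ranked_passages)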
We also see that re-ranked MSS-DPR comes close to or matches the performance of state-ofthe-art supervised retrievers (last row in Table to-end training of the retriever and language model, they are memory-intensive and too expensive to train for very large models. As such, UPR offers a viable alternative to expensive joint training. Intuition behind the performance gains obtained by UPR The question generation step in the re-ranker involves expressive cross-attention with the passage tokens. As a result, each question token attends to all the passage tokens in each decoder layer before predicting the next question token. This results in an accurate estimation of the relevance (or log-likelihood) scores than the original retriever scores, thus leading to an improved retrieval accuracy after re-ranking. This reasoning is further corroborated by our error analysis in Appendix A.3, where we present several examples where UPR improves over the incorrect BM25 retrievals. To understand the importance of re-ranking based on question generation p(q | z), we compare it with another unsupervised approach where reranking is based on passage generation conditioned on the question p(z | q). This quantity can be estimated by computing the average log-likelihood of generating the passage tokens using PLM and Natural Questions (dev) Figure teacher-forcing as where Θ denotes the parameters of the PLM and |z| denotes the number of passage tokens. For this analysis, we work with the NQ development set and obtain the union of top-1000 passages from the BM25 and MSS retrievers. These passages are re-ranked with two PLMs: T0-3B and GPT-2.7B. Our results in Figure To understand how much the choice of PLM contributes to top-K accuracy, we compare the performance of T5 (3B), T5-lm-adapt (different sizes), T0-{3B, 11B}, and GPT-neo (2.7 B) (as introduced in §3.4) on the NQ development set. We obtain the union of top-1000 passages retrieved from BM25 and MSS and then re-rank them with UPR. Results in Table When comparing across PLMs, we see that the performance of T5 suffers especially on top-{1, 5} accuracy levels. This might be because it was trained to predict corrupted spans, which is not ideal for text generation. On the other hand, autoregressive PLMs such as GPT-neo and T5-lmadapt tend to be better re-rankers. Furthermore, T0 obtains large improvements on top-{1, 5, 20}, demonstrating that finetuning with instructions on unrelated tasks is also beneficial for re-ranking. We study the effect of the number of passage candidates to be re-ranked on the retrieval performance along with the time taken. For this, we consider the NQ development set, re-rank up to top-1000 passages obtained from BM25, and use top-20 accuracy as the evaluation criteria. Results in Figure To gain a better understanding of the relative strengths of UPR and supervised (or finetuned) re-rankers, we perform zero-shot supervised transfer experiments and compare the results with UPR. We adopt the training method of We use the open-source checkpoints of monoT5 to re-rank the top-1000 passages retrieved by BM25 and report results on the NQ development set (Table We re-rank the top-1000 passages from every retriever with UPR. As the training set is not pro-Retriever BEIR nDCG@10 Recall@100 Baselines BERT We re-rank the top-1000 documents from Contriever and BM25 with the T0-3B PLM. 
Following convention, we report the macro average of NDCG@10 and Recall@100 metrics in Table Results demonstrate the effectiveness of UPR as NDCG@10 scores improve by 3-8% absolute and Recall@100 improves by 5-6%. We include performance numbers on individual datasets with finegrained analysis in Appendix A.4. Finally, we show that UPR improves the performance of full open-domain QA systems. For training FiD models, we use the top-100 retrieved passages and a batch size of 64. Detailed training hyperparameters are provided in Appendix A.1. During inference, an answer is generated using greedy decoding. For our experiments, we train the FiD base and large models using the retrieved documents from MSS, DPR, and MSS-DPR retrievers. We re-rank the top-1000 passages with UPR using the T0-3B PLM and then perform inference with the top-100 re-ranked passages. We conduct experiments on SQuAD-Open, TriviaQA, and NQ datasets and report the exact match (EM) scores for evaluation. We employ the same set of evidence passages for all the datasets. 7 7 Previous work has often used the 2016 Wikipedia dump as evidence for SQuAD-Open. As our evidence set is larger and newer, some questions may be unanswerable, which renders a Results are presented in Table Our work is based on re-ranking passages for opendomain retrieval using pre-trained language models (PLMs) which we have covered in earlier sections. Here, we instead focus on covering previous work related to generative pre-training, query likelihood for document ranking, and open-domain QA. Recently, there has been an increased adoption of the generative pre-trained transformer (GPT) sefair comparison difficult. However, to alleviate dataset-specific design choices, we adopt a common experimental setup. ries of models by the NLP community In information retrieval, an appealing approach to rank documents is by utilizing language models to compute relevance scores for a query Open-Domain QA involves producing answers to information-seeking questions from large document collections. Typical approaches consist of retriever and reader networks, where the retriever identifies a small number of documents to aid the reader in producing answers In this work, we propose UPR, an approach to perform unsupervised passage re-ranking for opendomain retrieval. To re-rank, UPR computes a relevance score for question generation conditioned on each retrieved passage using pre-trained language models. Extensive experiments across a wide range of QA datasets show that an unsupervised pipeline consisting of retriever and UPR greatly outperforms strong supervised retriever models. In addition, UPR further improves the performance of supervised retrievers. On the open-domain QA task, by just performing inference using re-ranked passages and a pre-trained reader model, we achieve new state-of-the-art results. UPR presents several interesting directions for future work. First, its applications to other retrieval tasks such as improving source-code retrieval based on textual queries can be explored. Second, another promising direction would be to tune instructions according to the nature of the retrieval tasks. For instance, when retrieving similar sentences in the BEIR benchmark, variations of the instruction prompt used in UPR can be explored. Finally, it would also be interesting to investigate the extent to which specialized language models such as the ones finetuned to generate questions using passagequestions data would further help in improving retrieval. 
task, PLMs trained on in-domain text The experiments conducted in the paper demonstrate the usefulness of large language models for information retrieval tasks when using English Wikipedia as the evidence source. However, when deployed in production, our work shares the typical ethical risks associated with large language models. There are chances that the re-ranked results may not be fair to all communities. This can potentially lead to an increased discrimination and exclusion of marginalized groups. These risks can also perpetuate to question-answering applications such as generating toxic or fake text as answers. Therefore, care should be taken before deploying our approach in real-world or customer facing applications; it is advisable to conduct tests and benchmark the models covering these aspects. • A link to a downloadable source code, with specification of all dependencies, including external libraries: We are submitting the source codes as a zip file. • The average runtime for each model or algorithm (e.g., training, inference, etc.), or estimated energy cost: We discuss the average runtime of performing inference with UPR in Sec. 4.2.3. However, we want to highlight that our codes were not carefully optimized to minimize runtime or to make optimal use of the hardware resources. • Number of parameters in each model: We provide these details in Sec. 3.4 and Table • Corresponding validation performance for each reported test result: The re-ranking experiments does not require validation set for model selection, as we only perform inference for each query using the language model and retrieved passages. If the program committee or reviewers require the validation set performance, we will include it in the Appendix in the final version of the paper. Our ablations and analysis are conducted on the validation set of datasets. For the open-domain QA experiments, we also report the performance on the validation set. • Explanation of evaluation metrics used, with links to code: Our evaluation metrics are standard and widely used by the community. We provide their details in the main paper in Sec. 4. The code is submitted with the paper. B.2 For all results involving multiple experiments, such as hyperparameter search • The exact number of training and evaluation runs: We provide training details for all models in Sec. 3.5. • Hyperparameter configurations for bestperforming models: We provide the hyperparameter settings in Appendix A.1. • Number of hyperparameter search trials: maximum 5. • The method of choosing hyperparameter values (e.g., uniform sampling, manual tuning, etc.) and the criterion used to select among them (e.g., accuracy): For the open-domain QA experiments, we performed manual hyperparameter tuning. We selected the best hyperparameter using EM results on the validation set. • Summary statistics of the results (e.g. mean, variance, error bars, etc.): The re-ranking experiments are based on performing inference using open-source PLMs using a single prompt. As such, these summary statistics are not applicable to UPR. The open-domain QA experiments are compute expensive utilizing a lot of CPU and GPUs resources and take time in the range of tens of hours. Therefore, due to computational and time constraints performing multiple runs for each experiment was not feasible. Therefore, we adopted the approach of using the same seed value (1234) for all the training runs. 
• Details of train/validation/test splits: We use the standard training / dev / test splits whose details are provided in Sec. 3.1 and Table • Relevant statistics such as number of examples and label distributions: We provide dataset statistics details in Table • An explanation of any data that were excluded, and all pre-processing steps: We include the relevant details in Sec. 3. • For natural language data, the name of the language(s): Our datasets are in English language. • A zip file containing data or link to a downloadable version of the data: All the datasets used in this work are open-source available and widely used by the community. Please refer to the respective dataset papers for the download links. • For new data collected, a complete description of the data collection process, such as instructions to annotators and methods for quality control: This is not applicable to this work. Table | 1,050 | 1,583 | 1,050 |
Tchebycheff Procedure for Multi-task Text Classification | Multi-task Learning methods have achieved significant progress in text classification. However, existing methods assume that multi-task text classification problems are convex multiobjective optimization problems, which is unrealistic in real-world applications. To address this issue, this paper presents a novel Tchebycheff procedure to optimize the multitask classification problems without any convex assumption. The extensive experiments back up our theoretical analysis and validate the superiority of our proposals. | Multi-task Learning (MTL) aims to learn multiple related tasks simultaneously, and obtain better performance than learning each task independently by setting inductive bias across tasks. Existing MTL methods for text classification, usually set up the inductive bias across tasks by designing a parameterized hypothesis class that shares some parameters across tasks (e.g. shares some hidden layers in a Neural Network), and cast the multi-task text classification problem as a multiobjective optimization problem. L 1 -metric method is one of the most popular strategies for solving the multi-objective optimization problem. Specifically, it learns the parameters by minimizing a weighted linear combination of per-task losses. And this method is able to find an arbitrary Pareto optimal solution in the Pareto set if the problem is convex. Unfortunately, for a non-convex problem, this *Corresponding author. method excludes many Pareto optimal solutions from its search scope. To illustrate the issue, it is instructive to consider a 2-tasks learning case shown as Figure To address the non-convexity problems, this paper proposes a novel Tchebycheff procedure to improve the performance of multi-task text classification. To validate the superiority of the proposed method, we conduct the experiments on two classical text classification problems: sentiment analysis on reviews | The family of Pareto optimality methods, including L 1 -metric methods (weighted sum methods) To handle the non-convex case, MGDA leverages the Karush-Kuhn-Tucker conditions and provides Pareto stationary points as solutions. However, the solutions are not sufficient to be Pareto optimal. A novel MTL method, which can achieve Pareto optimal without any convex assumption, is necessary to compensate for disadvantages in the L 1 -metric and MGDA. In this paper, a novel Tchebycheff procedure is proposed to achieve Pareto optimal without any convex assumption. Consider a multi-task learning problem with T tasks over an input space X and a collection of task spaces {Y t } T t=1 . There is also a parametric hypothesis h = {f t } T t=1 • g = {f t (g(x, θ sh ), θ t )} T t=1 : X → {Y t } T t=1 for each task, where θ sh represents the parameters shared between tasks, θ t represents the task-specific parameters, g(•, θ sh ) : X → R K is the feature map used across different tasks. K is the dimension of the representation space. The functions g(•, θ sh ) : X → R K and f t (•, θ t ) : X → Y t are chosen from respective hypothesis classes G and F. h is in hypothesis classes H. The choice of representation and specialized predictors is based on the data observed for all the tasks. The data takes the form of a multisample Correspondingly, the empirical loss of the task t is defined as i , θ sh ), θ t ), y t i ) . We also denote the transpose of the vector/matrix by superscript , the logarithms to base 2 by log. 
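The parametric hypothesis above, a shared feature map g followed by task-specific predictors f_t, corresponds to a standard hard parameter sharing network. A minimal PyTorch sketch is given below; the shared encoder is left generic (any feature extractor such as an LSTM or TextCNN, as used later in the excerpt, can be plugged in), and all names are illustrative.

# Minimal hard parameter sharing model: shared g(., theta_sh) plus per-task heads.
import torch.nn as nn

class HardSharingModel(nn.Module):
    def __init__(self, shared_encoder, feature_dim, num_classes_per_task):
        super().__init__()
        self.shared = shared_encoder                      # g(., theta_sh): X -> R^K
        self.heads = nn.ModuleList(                       # f_t(., theta_t)
            [nn.Linear(feature_dim, c) for c in num_classes_per_task]
        )

    def forward(self, x, task_id):
        features = self.shared(x)                         # shared representation
        return self.heads[task_id](features)              # task-specific prediction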
MTL can be formulated as a multi-objective optimization problem that optimizes a collection of possibly conflicting objectives where L(θ sh ; θ 1 , ..., θ T )=( L1 (θ sh , θ 1 ), ..., LT (θ sh , θ T )) . The goal of multi-objective optimization is to achieve the (weak) Pareto optimality. Definition 1 (Pareto optimality for MTL). The Pareto optimality for MTL is defined as: (i) A solution θ dominates a solution θ if Lt (θ sh , θ t ) ≤ Lt (θ sh , θ t ) for all tasks t and L(θ sh ; θ 1 , ..., θ t ) = L(θ sh ; θ 1 , ..., θ t ). (ii) A solution θ * is called Pareto optimal if there exists no solution θ that dominates θ * . Definition 2 (Weak Pareto optimality for MTL). A solution θ is weakly Pareto optimal if there does not exist another solution θ such that Lt (θ sh , θ t ) < Lt (θ sh , θ t ) for all tasks t. The set of (weak) Pareto optimal solutions are different trade-offs between tasks. The Pareto optimal set is a subset of the weakly Pareto optimal set. Global criterion is a standard technique for finding (weak) Pareto optimality, which optimizes all tasks together by minimizing a weighted L p -objective shown as (2). min where Non-convex Multi-objective Optimization: L ∞ metric can find every Pareto optimal solution without convex assumption. By contrast, the L 1 metric excludes some Pareto optimal solutions when the problem is non-convex In practice, most of the multi-task text classification problems are non-convex multi-objective problems, especially when the Deep Neural Network involved. According to the uniform convergence properties of MTL Weak Pareto optimality: The solution of a L ∞metric objective is weakly Pareto optimal. Figure Many multi-task neural network models can be used in multi-task text classification, such as hard parameter sharing networks Original hard parameter sharing network: A hard parameter sharing network learns multiple related tasks simultaneously by sharing the hidden layers across all tasks, while keeping task-specific output layers for each task shown as Figure The shared layers can be formulated by any feature extractor (e.g. long short-term memory (LSTM) Adversarial hard parameter sharing network: Cutting edge work where W ∈ R K×K and b ∈ R K . To boost the performance in non-convex problems, we use the Tchebycheff (L ∞ ) metric to formulate the optimization objective. The scales of empirical risks for different tasks can vary significantly. To normalize the scales, we divide each empirical risk in the MTL model with the empirical risk of learning the corresponding task independently, which typically have similar scale. That is, we define the weight w t in (2) as (4). where l t is the empirical risk of learning task t independently. In practice, we set l t to be the training loss of training task t independently and achieving the highest accuracy in verification. In the ERM (Empirical Risk Minimization) paradigm, it is reasonable to assume that the minimum empirical loss of each task equals 0. That is, l * t = 0 in (2). Further more, the empirical losses are non-negative. This paper present the Tchebycheff Loss for multi-task text classification as (5). t {w 1 L1 (θ sh , θ 1 ), ..., w T LT (θ sh , θ T )}, (5) where w t is defined in (4). Algorithm 2: Adv Tchebycheff Procedure Input: data D t = (X t , Y t ), the number of training epochs N e , α. Initialization: Train each task t independently, get l t (the loss corresponding to the highest verification accuracy) and initialize θ sh 0 with the hidden layers of task 1. 
Train the discriminator with θ sh i-1 and get The empirical loss of the discriminator can be formulated as (6). where 1 y i =t is the indicator function which equals to 1 when y i = t otherwise 0. In the adversarial MTL setting, we add the loss of the discriminator into the Tchebycheff loss. In the Tchebycheff procedure, we optimize θ sh with the discriminator when LD > α, where α is a hyper parameter. ( By minimizing the Tchebycheff loss (5) or (7), we can learn a (adversarial) hard parameter sharing network model. The training process of the model is defined as an (adversarial) Tchebycheff procedure, which is formulated as Algorithm 1 ( Algorithm 2 for the adversarial model). The networks are trained with backpropagation. In the adversarial Tchebycheff procedure, the dis-criminator is trained by using a gradient reversal layer The computational cost of training a hard parameter sharing network model with Tchebycheff procedure is higher than training it with a L 1 metric. The extra cost comes from the process of selecting the task with maximum loss. However, it can be easily reduced by parallelly computing loss of each task. In this section, firstly, we conduct a synthetic experiment to validate our theory analysis. Then, we perform experimental studies on two real-world applications: sentiment analysis and topic classification. The implementation is based on PyTorch In this section, two 2-objective optimization problems, problem 1 and 2 , are introduced to evaluate the performance of the L 1 metric method and the L ∞ metric method. Problem 1 is a convex 2objective optimization problem, while problem 2 is a non-convex 2-objective optimization problem. Let w 1 ∈ {0.01, 0.02, 0.03, ..., 0.99, 1} and w 2 = 1w 1 . We solve problem 1 by using the L 1 metric method (minimizing w 1 x 1 + w 2 x 2 ) and L ∞ metric method (minimizing max(w 1 x 1 , w 2 x 2 )) respectively. The results are shown in Figure Sentiment Analysis Topic Classification We implement our (adversarial) Tchebycheff Procedure via a deep MTL network with hard parameter sharing strategy In our experiments, TextCNN The extracted feature representations are then concatenated and classified by the task-specific output module, which has one fully-connected layer. The adversarial module is built with one fully connected layer whose output size equals to the number of the tasks. It is noteworthy that the adversarial module connects to the shared layers via a gradient reversal layer We train the deep MTL network model according to Algorithms 1 and 2 respectively. We set α be 2.5 and 1 for sentiment analysis and topic classification respectively. The learning rates are 1e -4 and 3e-4 for sentiment analysis and topic classification respectively. We use Adam optimizer We compare our proposed methods with baselines and some state-of-the-art methods: (i) Single Task: solving tasks independently, (ii) Uniform Scaling: minimizing a uniformly weighted sum of loss functions, (iii) MGDA: using the MGDA-UB method proposed by We report results over 10 runs by plotting classification accuracy of each classification task for sentiment analysis and topic classification in Figures 7 and 8 respectively. 
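A minimal sketch of the (adversarial) Tchebycheff loss described above, assuming the per-task empirical losses are already available as scalars: each risk is divided by the independently trained task loss l_t, the maximum weighted loss is minimized, and the discriminator loss is added only while it exceeds α. The gradient reversal layer follows the standard identity-forward/negated-backward formulation; all numbers are toy placeholders.

```python
# Sketch of the (adversarial) Tchebycheff loss: minimize the maximum of the
# normalized per-task losses, adding the discriminator loss while it is still
# above the threshold alpha.  All values below are illustrative placeholders.
import torch

def tchebycheff_loss(task_losses, single_task_losses, disc_loss=None, alpha=None):
    # w_t = 1 / l_t: each empirical risk is divided by the single-task loss l_t.
    weighted = torch.stack([L / l for L, l in zip(task_losses, single_task_losses)])
    loss = weighted.max()                          # L_inf (Tchebycheff) scalarization
    if disc_loss is not None and alpha is not None and disc_loss.item() > alpha:
        loss = loss + disc_loss                    # adversarial variant (Algorithm 2)
    return loss

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer: identity in the forward pass, negated gradient in
    the backward pass; it sits between the shared layers and the discriminator."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

# Toy usage with two task losses and one discriminator loss.
L1 = torch.tensor(0.9, requires_grad=True)
L2 = torch.tensor(0.4, requires_grad=True)
Ld = torch.tensor(3.0, requires_grad=True)
loss = tchebycheff_loss([L1, L2], single_task_losses=[0.5, 0.3],
                        disc_loss=Ld, alpha=2.5)
loss.backward()
shared_rep = torch.randn(4, 16, requires_grad=True)
reversed_rep = GradReverse.apply(shared_rep)       # input to the discriminator
```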
Figures Figure Figure To verify the convergence of the proposed (adversarial) Tchebycheff procedure, we plot curves of training loss for each task and discriminator in Figure We visualize the Tchebycheff procedure and adversarial Tchebycheff procedure with color maps as shown in Figures Figures In an adversarial Tchebycheff procedure, optimizing the adversarial task (4.5s per epoch for sentiment analysis and 2.1s per epoch for topic classification) is more time-consuming than optimizing a single task (3.5s per epoch for sentiment analysis and 1.5s per epoch for topic classification). However, optimizing the adversarial module appears less than 100 epochs. The extra computational cost resulted from the adversarial training can be ignored. We are able to accelerate the (adversarial) Tchebycheff procedure with Multi-processing. In Multi-processing (adversarial) Tchebycheff procedure, we accelerate the procedure of selecting the task by computing the loss of each task in different processes. We implement the code by using the multiprocessing package in PyTorch. From Table Most of multi-task text classification problems are non-convex multi-objective optimization problems. However, existing methods ignore the nonconvexity and solve the problems using convex optimization methods. To address this issue, this paper presents an (adversarial) Tchebycheff procedure for multi-task text classification without any convex assumption. Numerical experiments show that our proposed methods can converge and outperform state-of-the-art methods. In the Tchebycheff Procedure, we choose the weight for each task according to the empirical risk of learning the corresponding task independently. Obtaining the empirical risk is a little laborious. In the future, it would be fruitful to develop a novel weighting strategy for the Tchebycheff Procedure. | 522 | 1,382 | 522 |
Few-Shot Dialogue Summarization via Skeleton-Assisted Prompt Transfer in Prompt Tuning | In real-world scenarios, labeled samples for dialogue summarization are usually limited (i.e., few-shot) due to high annotation costs for high-quality dialogue summaries. To efficiently learn from few-shot samples, previous works have utilized massive annotated data from other downstream tasks and then performed prompt transfer in prompt tuning so as to enable cross-task knowledge transfer. However, existing general-purpose prompt transfer techniques lack consideration for dialoguespecific information. In this paper, we focus on improving the prompt transfer from dialogue state tracking to dialogue summarization and propose Skeleton-Assisted Prompt Transfer (SAPT), which leverages skeleton generation as extra supervision that functions as a medium connecting the distinct source and target task and resulting in the model's better consumption of dialogue state information. To automatically extract dialogue skeletons as supervised training data for skeleton generation, we design a novel approach with perturbationbased probes requiring neither annotation effort nor domain knowledge. Training the model on such skeletons can also help preserve model capability during prompt transfer. Our method significantly outperforms existing baselines. Indepth analyses demonstrate the effectiveness of our method in facilitating cross-task knowledge transfer in few-shot dialogue summarization. | Automatic text summarization | The user asks for the address, postcode and phone number of Oriental House. The restaurant is in the east and the food is expensive. Few-Shot Figure days In existing works, one common way to tackle the data scarcity problem is to perform transfer learning by leveraging off-the-shelf out-of-domain or out-of-task supervised data Among recent transfer learning techniques, prompt transfer How to improve prompt transfer in a taskspecific manner? The existing general-purpose prompt transfer technique In this paper, we propose a dialogue-specific prompt transfer technique, named Skeleton-Assisted Prompt Transfer (SAPT). SAPT provides the model with extra supervision during its prompt transfer by training it to perform skeleton gener-ation along the way. This extra supervision can essentially function as an intermediate task-specific medium that is beneficial for the knowledge transfer between the distinct source and target task. To get the supervised training data for skeleton generation, we design a novel automatic skeleton extraction approach that requires neither annotation effort nor domain knowledge. Specifically, we observe the model's output variation to perturbation-based probes and extract the dialogue turns to which the model displays the highest sensitivity as skeletons. Training the model on such skeletons can also help preserve model capability during prompt transfer. The idea behind this is that we try to prevent the model from forgetting the dialogue-state-related knowledge it has learned during its pretraining on supervised DST data, since the model sensitivity to perturbation-based probes in the DST task intrinsically reflects the capability of processing dialogue state information it has developed. Experimental results and in-depth analyses with BART In summary, our main contributions are: • We focus on improving the prompt transfer in prompt tuning from dialogue state tracking to few-shot dialogue summarization. 
To the best of our knowledge, SAPT is the first effective dialogue-specific prompt transfer technique. • By training the model to perform skeleton generation during prompt transfer, SAPT provides extra supervision that essentially functions as an intermediate task-specific medium between the distinct source and target task, allowing the model to better consume the dialogue state information from the source task. • To preserve model capability during prompt transfer, we design a novel approach that employs perturbation-based probes to automatically extract dialogue skeletons as supervised training data for skeleton generation, requiring neither annotation effort nor domain knowledge. Abstractive dialogue summarization is typically formulated as a sequence-to-sequence problem To mitigate the data scarcity problem, it is common to turn to transfer learning by leveraging massive supervised data from other related domains or tasks that could potentially provide useful knowledge. Dialogue state tracking (DST), a related task to dialogue summarization, aims to correctly infer the speaker's goal in the form of semantic slot-value pairs Although the DST task is traditionally formulated as a classification problem, recent work Among recent transfer learning techniques, prompt transfer Prompt tuning is a new paradigm of utilizing PLMs for downstream tasks. It is motivated by the intuition that PLMs can be steered with a proper context, without the need for any model parameter updates. In prompt tuning, a sequence of continuous trainable embeddings called "soft prompt", parameterized by ϕ, is prepended to the input sequence. During training, all parameters of the PLM (θ) are frozen, but unlike prompt design Prompt transfer is a simple yet effective transfer learning technique designed for prompt tuning. The soft prompt is first trained in the source task and then used as parameter initialization for the prompt tuning in the target task. Prompt transfer inherits the advantage of prompt tuning in terms of parameter efficiency, as its transfer learning process likewise relies merely on the lightweight soft prompt. 3 Method: Skeleton-Assisted Prompt Transfer (SAPT) The existing non-task-specific general-purpose prompt transfer technique a textual similarity metric Sim(•, •) (higher means more similar). Output: a collection of dialogue skeletons: for j = 1, 2, . . . , p i do 5: add s i to S 16: return S the form of extra task supervision separately incorporated into both the source and target task supervision, since in this way the updated source and target task have more overlap and get semantically closer to each other. Also, as the model capability of processing source task data is closely associated with the knowledge it has gained during the source task pretraining, it needs to be effectively preserved during the prompt transfer to facilitate the target task. Nonetheless, the capability per se is admittedly a bit abstract and thus hard to concretely model in practice. Inspired by recent advances in interpretable NLP, we argue that the model sensitivity to perturbation-based probes should arguably be a concretization of model capability To these ends, we propose Skeleton-Assisted Prompt Transfer (SAPT), a dialogue-specific prompt transfer technique. SAPT provides the model with extra supervision during its prompt transfer by training it to perform skeleton generation along the way (detailed in subsection 3.1). This extra supervision (i.e. 
skeleton generation) is separately incorporated into both the source and target task supervision, and thus can essentially function as an intermediate task-specific medium (because of the increased overlap between the updated source and target task) that is beneficial for the cross-task knowledge transfer. To get the supervised training data for skeleton generation, we design a novel automatic skeleton extraction approach that requires neither annotation effort nor domain knowledge (detailed in subsection 3.2). Specifically, we observe the model's output variation to perturbation-based probes and extract the dialogue turns to which the model displays the highest sensitivity as skeletons. Training the model on such skeletons can also help preserve model capability during prompt transfer. This is because those skeletons (extracted with perturbationbased probes) embody the model sensitivity to perturbation-based probes which is a concretization of model capability. On the whole, SAPT creates an intermediate task-specific medium using skeleton generation as extra supervision ( §3.1), and preserves model capability during prompt transfer by training the model on the skeletons extracted with perturbation-based probes ( §3.2). As a result, the distinct source and target task is able to be better connected because they have got semantically closer to each other, and the target task is able to be facilitated because the model has been discouraged from forgetting the knowledge it has gained during the source task pretraining. §3.3 describes SAPT's overall workflow. In SAPT, the skeleton generation task is incorporated into the original task (either the source or the target task, or both) as extra supervision. We denote a supervised sample of the original task as (x, y), where x represents the dialogue history and y represents the original task supervision that could be either the sequence-to-sequence-based dialogue state ground-truth or the dialogue summary ground-truth. For each sample (x, y), We also have a dialogue skeleton, denoted as s, extracted from the dialogue history x (the skeleton extraction algorithm is detailed in subsection 3.2). Such a dialogue skeleton is essentially an ordered collection of dialogue turns. For instance, if a dialogue history x contains p dialogue turns, i.e. x = [t 1 , t 2 , . . . , t p ], its dialogue skeleton s will contain q dialogue turns (q ≤ p), denoted as s = [t s 1 , t s 2 , . . . , t s q ], and thus set(s) ⊆ set(x). The dialogue skeleton s is appended to the original task supervision y as extra supervision, and the model is trained to perform the original task and then skeleton generation. The new log-likelihood training objective is: We extract dialogue skeletons (used as supervised training data for skeleton generation in subsection 3.1) with perturbation-based probes. Given a dialogue in a collection of dialogues, x i ∈ X , we first construct the perturbation-based probes by deleting a dialogue turn from x i at a time. The resultant perturbation-based probes can be expressed as We then feed those perturbation-based probes individually into the trained source-task (DST) model, LM DST , and get the model output o ij corresponding to each deleted dialogue turn t ij . In the meantime, we also feed the whole dialogue history x i into LM DST and get the model output o i . Next, we compute the textual similarity score m ij between o i and o ij using a textual similarity metric Sim(•, •) (higher means more similar). 
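The extraction procedure (Algorithm 1) can be sketched as follows; dst_model stands in for the trained source-task DST model, sim for the similarity metric Sim(·,·) (ROUGE-L F1 in the paper's setup, approximated by unigram overlap in the toy usage below), and the global-median threshold follows the description in the next paragraph.

```python
# Illustrative sketch of perturbation-based skeleton extraction (Algorithm 1).
# `dst_model` is a placeholder for the trained source-task (DST) model and `sim`
# for the textual similarity metric Sim(.,.) -- the paper uses ROUGE-L F1.
from statistics import median

def extract_skeletons(dialogues, dst_model, sim):
    scores = []                                   # (dialogue_idx, turn_idx, score)
    for i, turns in enumerate(dialogues):
        full_out = dst_model(turns)               # output on the whole dialogue x_i
        for j in range(len(turns)):
            probe = turns[:j] + turns[j + 1:]     # delete turn t_ij
            probe_out = dst_model(probe)          # output on the perturbed probe
            scores.append((i, j, sim(full_out, probe_out)))

    threshold = median(s for _, _, s in scores)   # global median over all scores
    skeletons = [[] for _ in dialogues]
    for i, j, s in scores:
        if s < threshold:                         # model is most sensitive to t_ij
            skeletons[i].append(dialogues[i][j])  # turn order within x_i is kept
    return skeletons

# Toy usage with stand-in model and similarity (unigram overlap as a proxy).
toy = [["USER: book a table", "SYSTEM: for how many?", "USER: two people"]]
fake_dst = lambda turns: " ".join(turns)
overlap = lambda a, b: len(set(a.split()) & set(b.split())) / max(len(set(a.split())), 1)
print(extract_skeletons(toy, fake_dst, overlap))
```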
We execute the aforementioned procedure for each dialogue in X . After that, we group together all the similarity scores we compute along the way and find the median of them. Finally, we extract those dialogue turns, whose corresponding similarity scores are less than the median, as the dialogue skeletons. Algorithm 1 presents the process of extracting a dialogue skeleton s i for each dialogue x i ∈ X . Built on top of SPOT 1. perform prompt tuning on the DST (source task) supervision; 2. perform prompt transfer from the previous step, and then perform prompt tuning on the DST (source task) & skeleton generation supervision; 3. perform prompt transfer from the previous step, and then perform prompt tuning on the (few-shot) dialogue summarization (target task) & skeleton generation supervision; 4. perform prompt transfer from the previous step, and then perform prompt tuning on the (few-shot) dialogue summarization (target task) supervision. , SAPT [DST] omits step #3 while SAPT [SUMM] omits step #2; SPOT To study the cross-task prompt transfer from dialogue state tracking (DST) to few-shot dialogue summarization, we perform experiments on a DST dataset: MultiWOZ 2.2 We use BART-large We use the widely-used ROUGE metrics PROMPT TUNING of them). To further evaluate the generated summaries, we perform a human evaluation via crowdsourcing. We randomly select 100 samples from TODSUM test set and run different models on them to generate summaries. We recruit human participants on Prolific 4 , a crowdsourcing platform, to rate the generated summaries (and also the ground-truth summaries) from 0 to 2 in terms of four evaluation metrics: informativeness, faithfulness, fluency, and redundancy 5 . Each summary instance is evaluated by 5 different human participants, and the interannotator agreement (IAA) score for each metric is 0.577, 0.635, 0.649, 0.591, with an average IAA of 0.613. Results shown by the average scores in Ta-4 ble 2 are consistent with the automatic evaluation results: all three SAPT variants outperform the baseline method SPOT, and SAPT[DST+SUMM] consistently performs the best across all metrics. Meanwhile, all generated summaries are deemed to be worse than the ground-truth summaries, meaning that there is still room for these summarization models to be improved. We also conduct a case study by ourselves, detailed in Appendix D. To fully investigate the effectiveness of SAPT, we study the impact of skeleton type, decoding order, and source & target task supervision. Table Source & Target Task Supervision. We remove all the original task supervision along the way. The observed performance drop is as expected, but the superior performance against SPOT demonstrates the benefit our skeletons bring for cross-task knowledge transfer. Parameter-Efficient Transfer Learning. To efficiently make use of pretrained language models (PLMs) We focus on improving the prompt transfer in prompt tuning from dialogue state tracking to few-shot dialogue summarization, and propose SAPT, a dialogue-specific prompt transfer technique, which uses skeleton generation as extra supervision by training the model on the dialogue skeletons extracted with perturbation-based probes. In this way, a beneficial intermediate task-specific medium is created between the source and target task, and the model capability is able to be better preserved during the prompt transfer, resulting in the model's better consumption of dialogue state information from the source task. 
Significantly stronger empirical performance and in-depth analyses on two dialogue summarization benchmarks demonstrate the effectiveness of our method in fewshot dialogue summarization. Despite the strong performance achieved by SAPT, we use the pre-trained language model (PLM) as the backbone of our method. Therefore, we cannot go beyond the limitation of the maximum sequence length of the PLM. In fact, long-form language understanding and generation have been widely acknowledged as an open research question that needs much further investigation, which is beyond the scope of our paper. All datasets used in this work are public. We did not collect any personal information from our human participants nor did we present them with any harmful model outputs. Our dialogue summarization models face the same potential pitfalls as other contemporary language learning systems do, e.g. being prone to echoing the biases present in the dataset We recruit 30 human participants on Prolific We measure the duration of the annotation processes for both dialogue state and dialogue sum-mary. The average duration of annotating a dialogue for its dialogue states is 1.3 minutes; the average duration of annotating a dialogue for its dialogue summary is 3.8 minutes, which is much longer. These results are in line with our intuition: the annotation of a dialogue summary requires not only tracking the dialogue states, but also having an utterance-level detailed understanding of the dialogue, because only after understanding the whole dialogue progression can annotators write a fluent and faithful summary. We use Hugging Face Transformers 7 All turns of the input dialogue are prepended with special tokens as speaker identifiers ([USER] or [SYSTEM]), and then concatenated into a single input sequence which is truncated to 1024 BPE tokens. We use the ROUGE-L F1 score as the textual similarity metric Sim(•, •) in Algorithm 1. The dialogue skeletons are appended to the groundtruth dialogue states (or summaries), and there is a special token [SEP] between the dialogue states (or summaries) and skeletons. Human participants are asked to read the summaries and give their ratings (0, 1, or 2) in terms of four evaluation metrics: • Informativeness examines whether the critical information in the dialogue is missed in the summary: ⋆ 0: lots of the critical information in the dialogue is missed; ⋆ 1: a small amount of the critical information in the dialogue is missed; ⋆ 2: no critical information in the dialogue is missed. 7 • Faithfulness examines whether the information presented in the summary is factually incorrect or unmentioned according to the dialogue: ⋆ 0: lots of the information presented in the summary is factually incorrect or unmentioned; ⋆ 1: a small amount of the information presented in the summary is factually incorrect or unmentioned; ⋆ 2: no information presented in the summary is factually incorrect or unmentioned. • Fluency examines whether the sentences in the summary are ungrammatical or ill-formed: ⋆ 0: lots of the sentences in the summary are ungrammatical or ill-formed; ⋆ 1: a small amount of the sentences in the summary are ungrammatical or illformed; ⋆ 2: no sentence in the summary is ungrammatical or ill-formed. • Redundancy examines whether the expressions of the summary can be simplified: ⋆ 0: lots of the expressions of the summary can be simplified; ⋆ 1: a small amount of the expressions of the summary can be simplified; ⋆ 2: no expression of the summary can be simplified. 
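Separately from the rating rubric above, the appendix describes how training examples are assembled: turns carry [USER]/[SYSTEM] speaker identifiers and are concatenated into a single input (truncated to 1024 BPE tokens by the tokenizer), and the extracted skeleton is appended to the ground-truth dialogue states or summary behind a [SEP] token. The following is a minimal sketch of that data preparation with toy strings as placeholders.

```python
# Sketch of building one training example for the skeleton-augmented objective:
# input  = speaker-tagged dialogue turns, concatenated (truncation to 1024 BPE
#          tokens is handled by the tokenizer in the actual setup);
# target = original supervision (dialogue states or summary) + " [SEP] " + skeleton.
def build_example(turns, speakers, original_target, skeleton_turns):
    tagged = [f"[{spk.upper()}] {turn}" for spk, turn in zip(speakers, turns)]
    source = " ".join(tagged)
    target = original_target + " [SEP] " + " ".join(skeleton_turns)
    return source, target

src, tgt = build_example(
    turns=["I need a cheap hotel.", "Any area preference?", "The north, please."],
    speakers=["user", "system", "user"],
    original_target="hotel-pricerange=cheap; hotel-area=north",
    skeleton_turns=["I need a cheap hotel.", "The north, please."],
)
print(src)
print(tgt)
```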
We present a case study in Table | 1,396 | 28 | 1,396 |
Saying No is An Art: Contextualized Fallback Responses for Unanswerable Dialogue Queries | Despite end-to-end neural systems making significant progress in the last decade for taskoriented as well as chit-chat based dialogue systems, most dialogue systems rely on hybrid approaches which use a combination of rulebased, retrieval and generative approaches for generating a set of ranked responses. Such dialogue systems need to rely on a fallback mechanism to respond to out-of-domain or novel user queries which are not answerable within the scope of the dialogue system. While, dialogue systems today rely on static and unnatural responses like "I don't know the answer to that question" or "I'm not sure about that", we design a neural approach which generates responses which are contextually aware with the user query as well as say no to the user. Such customized responses provide paraphrasing ability and contextualization as well as improve the interaction with the user and reduce dialogue monotonicity. Our simple approach makes use of rules over dependency parses and a text-to-text transformer fine-tuned on synthetic data of question-response pairs generating highly relevant, grammatical as well as diverse questions. We perform automatic and manual evaluations to demonstrate the efficacy of the system. | In order to cater to the diversity of questions spanning across various domains, dialogue systems generally follow a hybrid architecture wherein an ensemble of individual response subsystems One approach to acknowledge such queries is to have a fallback mechanism with responses like "I don't know the answer to this question" or "I'm not sure how to answer that." However, such responses are static and unengaging and give an impression that the user's query has gone unacknowledged or is not understood by the system as shown in Figure Our fallback approach attempts to address these limitations by generating "don't-know" responses which are engaging and contextually closer with the user query. 1) Since there are no publicly available datasets to generate such contextualised responses, we synthetically generate (query, fallback response) pairs using a set of highly accurate handcrafted dependency patterns. 2) We then train a sequence-to-sequence model over synthetic and natural paraphrases of these queries. 3) Finally, we measure the grammaticality and relevance of our models using a crowd-sourced setting to assess the generation capability. We have released the code and training dataset used in our experiments publicly. | Improving the coverage to address out-of-domain queries is not a new problem in designing dialogue systems. The most popular approach has been via presenting the user with chit-chat responses. Other systems such as Blender We describe two approaches to generate such contextual don't-know responses. Inspired by previous approaches which use parse structures to generate questions To build this baseline generator, we utilize few dependency templates in the style of SynQG We incorporate a bit of paraphrasing by randomizing various prefixes like "I'm not sure whether", "I don't know if", etc. and randomly using named entities. We describe the high-level algorithm below and in Algorithm 1. Owing to the expected low coverage and scalability of the rule-based approach, we resort to take advantage of pre-trained neural architectures to attempt to create a sequence-to-sequence fallback responder. 
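As an illustration of the rule-based responder, the sketch below parses the query with spaCy, applies a single simplified transformation pattern (a stand-in for the hand-crafted dependency templates, not the exact rules used in the system), and wraps the result in a randomly chosen hedge prefix from an abbreviated prefix list.

```python
# Illustrative sketch of a rule-based "don't-know" responder: parse the query,
# apply a small surface transformation, and wrap it in a randomly chosen hedge
# prefix.  The single pattern below (handling WH- and auxiliary-fronted
# questions) is a simplified stand-in for the hand-crafted dependency templates.
import random
import spacy

nlp = spacy.load("en_core_web_sm")
PREFIXES = ["I'm not sure", "I don't know", "I wish I knew"]

def dont_know_response(question):
    tokens = [t for t in nlp(question) if not t.is_punct]
    first = tokens[0]
    if first.tag_ in {"WP", "WRB", "WDT", "WP$"}:           # e.g. who/what/where/why
        clause = " ".join(t.text for t in tokens)
        clause = clause[0].lower() + clause[1:]
        return f"{random.choice(PREFIXES)} {clause}."
    if first.pos_ == "AUX":                                  # yes/no question: "Is X ...?"
        rest = tokens[1:]
        # naive swap: place the fronted auxiliary after the subject head
        subj = next((t for t in rest if t.dep_ in {"nsubj", "nsubjpass"}), rest[0])
        clause_tokens = []
        for t in rest:
            clause_tokens.append(t.text)
            if t.i == subj.i:
                clause_tokens.append(first.text.lower())
        return f"{random.choice(PREFIXES)} whether {' '.join(clause_tokens)}."
    return f"{random.choice(PREFIXES)} about that."

print(dont_know_response("Who directed the movie Titanic?"))
print(dont_know_response("Is Paris the capital of France?"))
```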
To incorporate noise and avoid the model to over-fit on the handcrafted transformations, we do not train the model directly on (query, don't-knowresponse) pairs generated from the previous section. From all possible questions of the Quora Questions Pairs dataset (QQP) After incorporating paraphrases from QQP, we are able to build a dataset of 100k pairs, which we call the "I Dont Know Dataset" (IDKD). After witnessing the success of text-to-text transformers, we use the pre-trained T5 transformer Most prior generated systems are evaluated on a range of automatic metrics like BLEU and ROGUE In addition, T5 responses on an average generate at least double the number of novel words than their dependency counterparts as shown in Table The T5 model helped to not only add paraphrastic variations but also scale to user queries outside of the scope of the dependency templates. More importantly, without losing the original ability of saying no, the model was able to generate more natural sounding dont-know-reponses by utilizing it's inherent world-knowledge acquired during pretraining. Table We describe two simple approaches which enhance user interaction to cater to the necessities of reallife dialogue systems which are generally a tapestry of multiple solitary subsystems. In order to avoid cascading errors from such systems, as well as refrain from answering out-of-domain and toxic queries it is but natural to have a fallback approach to say no. We argue that such a fallback approach could be contextualised to generate engaging responses by having multiple ways of saying no rather than a one common string for all approach. The appeal of our approach is the ease with which it can rightly fit within any larger dialogue design framework. Of course, this is not to deny that as we give more paraphrasing power to the fallback system, it would tend to retract from succinctly replying with a no -as is evident from the drop in the relevance scores. Nevertheless, we still believe that both our fallback approaches could serve as effective baselines for future work. | 1,228 | 1,235 | 1,228 |
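For the neural responder, a minimal Hugging Face sketch of fine-tuning a pre-trained T5 on (query, don't-know response) pairs is given below; the two training pairs, hyperparameters, and single optimization step are toy placeholders rather than the actual IDKD training setup.

```python
# Minimal sketch: fine-tune T5 as a seq2seq fallback responder on
# (question, don't-know response) pairs.  Data and hyperparameters are toy
# placeholders, not the IDKD training configuration.
import torch
from transformers import T5ForConditionalGeneration, T5TokenizerFast

tokenizer = T5TokenizerFast.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

pairs = [
    ("Who directed the movie Titanic?",
     "I'm not sure who directed the movie Titanic."),
    ("Is Paris the capital of France?",
     "I don't know whether Paris is the capital of France."),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
inputs = tokenizer([q for q, _ in pairs], return_tensors="pt", padding=True)
labels = tokenizer([r for _, r in pairs], return_tensors="pt", padding=True).input_ids
labels[labels == tokenizer.pad_token_id] = -100        # ignore padding in the loss

model.train()
loss = model(**inputs, labels=labels).loss             # teacher-forced cross-entropy
loss.backward()
optimizer.step()

model.eval()
gen = model.generate(**tokenizer("Who wrote Hamlet?", return_tensors="pt"),
                     max_new_tokens=32)
print(tokenizer.decode(gen[0], skip_special_tokens=True))
```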
Normalizing Mutual Information for Robust Adaptive Training for Translation | Despite the success of neural machine translation models, tensions between fluency of optimizing target language modeling and sourcefaithfulness remain as challenges. Previously, Conditional Bilingual Mutual Information (CBMI), a scoring metric for the importance of target sentences and tokens, was proposed to encourage fluent and faithful translations. The score is obtained by combining the probability from the translation model and the target language model, which is then used to assign different weights to losses from sentences and tokens. Meanwhile, we argue this metric is not properly normalized, for which we propose Normalized Pointwise Mutual Information (NPMI). NPMI utilizes an additional language model on source language to approximate the joint likelihood of source-target pair and the likelihood of the source, which is then used for normalizing the score. We showed that NPMI better captures the dependence between source-target and that NPMI-based token-level adaptive training brings improvements over baselines with empirical results from En-De, De-En, and En-Ro translation tasks. | Neural machine translation (NMT) models have achieved remarkable performance since Conditional Bilingual Mutual Information (CBMI), a metric for target tokens and sentences computed as the log quotient of the translation and the target-side language model probability, was While our proposed sentence-level NPMI assigns a large score (near the upper bound 1) to the faithful sourcetarget pair and a small score (near zero, which indicates neutrality) to a rather noisy pair, sentence-level CBMI scores for the two pairs are unable to achieve that. Note that the joint log likelihood values from the two pairs are comparable, while target lengths differ a lot. proposed by While CBMI score is devised to pursue this joint goal of translation, we argue that it does not take source context into account which leads to its failure to provide a reliable measure of relevance for some cases. For example, Figure We argue that this is because CBMI has a tendency to assign higher values for noisy or unlikely examples and vice versa due to the nature of pointwise mutual information (PMI) with an unbounded range. Inspired by normalized pointwise mutual infor-mation (NPMI), we propose to normalize by joint log likelihood (denoted as log P (src, tgt) in Figure Our method is validated with WMT and IWSLT benchmarks with diverse language pairs and shows consistent improvements in COMET | 6 Related work Given two random variables X and Y , the pointwise mutual information (PMI) between the observations x and y is which is not bounded below and has an upper bound oflog p(x, y). Token-level adaptive training, inspired by earlier approaches to fighting class imbalance problem in classification tasks, aims to assign static or dynamic weights to each of the tokens to further guide the translation model Token-level CBMI, which is used to determine weights of loss from each target token, is the PMI between the target token considered and the whole source sentence x, conditioned on the partially con-structed target prefix y <j . CBMI t (x; y j ) := PMI(x; y j | y <j ) (3) where p TM (j) = p(y j | x, y <j ; θ TM ) is the translation model's output probability on the j-th target token and similarly p tLM (j) = p(y j | y <j ; θ tLM ) is the target-side language model's prediction on the same token. 
For sentence-level scoring, token-level scoring is aggregated then normalized by the target sentence length |y| as follows: Note that unlike token-level CBMI defined simply as the PMI between the source sentence and the target token considered by equation ( Our critiques for their proposed normalization are as follows: • Source-agnostic: When the pair of the source and the target has relatively low (or high) likelihood, both token-and sentence-level CBMI scores can be over-(or under-) estimated as Figure • Mapping: λ and σ together determine how much the final weights of tokens with different CBMI scores will diverge, while the former is an empirically determined constant and the latter may vary across batches, or over time as training proceeds and the model output changes. 3 Proposed: Source-aware Normalization We first propose an alternative normalization, inspired by NPMI, then discuss how this score guides adaptive training. Our first contribution is to propose a better founded normalization used in -1 < NPMI(x; y) = PMI(x; y) This normalization bounds scoring within the range (-1, 1] and the sign of scoring can also be interpreted: 0 for independence, 1 for complete cooccurrence, and -1 for no co-occurrence. However, this requires the estimation of p(x, y), which we derive to obtain from the source-side language model (sLM) as below: In the same way, we can derive token-level NPMI: where q(j) := log p TM (j)log p tLM (j). While this derivation is more complex than that of CBMI, it can still be computed efficiently in one forward pass. With NPMI normalization bounding its range to ±1, we no longer require λ or σ for rescaling, but simply multiplying source-and token-level relative scores: where x + := max (x, 0), to honor the design of "positive" NPMI values, by selectively weighing pairs with cooccurrences, and µ is the average of positive NPMI values that helps center the weights at 1. The weight w j , relying on translation and language models themselves, is less reliable in earlier stages of training, when it can be better off resorting to unweighted loss. This estimation gradually gets better in later stages. We thus adopt dynamic smoothing, between weighting all tokens as 1, and by w j , where c increase over time during training. Compared to CBMI, which solved the same problem through training the translation and the language model for some steps with the unweighted negative log-likelihood loss then applying the weighting of tokens to the translation model afterward, we increased the value of c over training steps so that it exponentially approaches a targeted value. The former approach of skipping the weighting in the earlier stage of training can be viewed as setting c = 0 for some steps then fixing c = 1 for the rest. In contrast, by gradually increasing the ratio c, the model is allowed to be guided by the faithfulness measure relatively earlier in training, preventing it from being fully affected by detrimental training examples. We also note that mixing the unweighted and weighted loss with time-varying ratio c essentially has an effect of dynamically manipulating the scale hyperparameter λ in CBMI. We only use language models for assisting the translation model during the training, that is, at inference time only the translation model is used for decoding. 
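Given sentence-level log-likelihoods from the translation model, the target-side LM, and the source-side LM, the sentence-level NPMI above can be computed directly, and the token losses can be smoothed between uniform and NPMI-based weights with the time-varying ratio c. The token-weight form below (clipped scores multiplied and centred by the mean μ of the positive values) is a simplified reading of the weighting equations and should be treated as an assumption.

```python
# Sketch of source-aware NPMI scoring and the smoothed token-level loss.
# Inputs are log-probabilities produced during training by the translation
# model (TM), target-side LM (tLM) and source-side LM (sLM).
import torch

def sentence_npmi(logp_tm_y_given_x, logp_tlm_y, logp_slm_x):
    # PMI(x; y) = log p(y|x) - log p(y);  p(x, y) = p(x) * p(y|x)
    pmi = logp_tm_y_given_x - logp_tlm_y
    return pmi / -(logp_slm_x + logp_tm_y_given_x)        # NPMI in (-1, 1]

def token_weights(token_npmi, sent_npmi):
    # Assumed form: multiply clipped ("positive") token- and sentence-level
    # scores and centre them by the mean of the positive values (mu).
    pos = torch.clamp(token_npmi, min=0.0) * max(sent_npmi.item(), 0.0)
    mu = pos[pos > 0].mean() if (pos > 0).any() else torch.tensor(1.0)
    return pos / mu

def smoothed_loss(token_nll, weights, c):
    # Interpolate between the unweighted NLL (early training, c ~ 0) and the
    # NPMI-weighted NLL (later training, c approaching its target value).
    return ((1.0 - c) * token_nll + c * weights * token_nll).sum()

# Toy numbers for a 4-token target sentence.
token_nll = torch.tensor([2.3, 0.7, 1.5, 0.2])
token_npmi = torch.tensor([0.6, -0.1, 0.4, 0.05])
s_npmi = sentence_npmi(torch.tensor(-8.0), torch.tensor(-14.0), torch.tensor(-20.0))
print(smoothed_loss(token_nll, token_weights(token_npmi, s_npmi), c=0.3))
```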
We conducted experiments on three translation datasets, namely (1) WMT14 English-German (En-De) dataset which consists of approximately 4.5M training examples, (2) WMT16 English-Romanian (En-Ro) dataset comprising of about 610k examples, and (3) IWSLT14 De-En dataset for spoken language, which comes with 160k training examples. Following previous work, we used joined vocabulary of size 32k constructed using byte-pair encoding Table Here we present more detailed analyses of our method regarding how the different levels of weighting and dynamic weight smoothing over time affected the performance. All the results are from experiments conducted on IWSLT14 De-En and evaluated on the test set. First, we inspect the effects of token-and sentencelevel weighting separately. Next, we examine the effect of dynamic weight smoothing on CBMI and NPMI. Table Under encoder-decoder seq2seq framework, the decoder is responsible for both capturing the embedded content of the source sequence and generating a fluent target sequence that faithfully reflects the captured information. As an example of work tried to relieve this burden through the use of LM on the target language, Inspired by previous work in vision field, In this paper, we propose a source-aware metric for target tokens and sentences based on normalized pointwise mutual information (NPMI) that effectively captures the dependence between the source and the target for translation task. With this score, the model can figure out how much specific tokens require the source context for proper translation and how faithful a given source-target pair is, thereby putting more focus on examples with higher adequacy or importance. We also devise a new token-level adaptive training strategy based on NPMI score, which dynamically adjusts the participation of weighted loss over time to gracefully overcome the limitation of imprecise approximation of model output probabilities in the earlier training stage. Experimental results on translation benchmarks show that our proposed NPMI, combined with dynamic weight smoothing, performs well over various datasets and languages. We also validated through ablation experiments that our methods offer the best results when they are used together. We leave (1) the search for the best way of scheduling for weight smoothing and (2) leveraging powerful pretrained language models, rather than language models trained from scratch as future work. Assuring that the model pays more attention to the source sequence when necessary might not be enough for successful translation. Our model sometimes generated word-for-word translations, which are firmly rooted in the source sequence but not necessarily revealing the true meaning of the idiomatic phrase it contained. Hopefully, it might be alleviated given access to additional training data. Also, due to the additional source-side LM, our method requires additional GPU memory, which could be a burden. In the case of using joined vocabulary, this can be relieved via using a unified language model for both the source and the target language, with minimal performance degradation as described in 5.2. For IWSLT, we used transformer_iwslt_de_ en architecture, which has 6 layers in both the encoder and the decoder with embedding size 512, feed-forward network hidden dimension of 1024, 4 attention heads and applied dropout rate of 0.3. Lastly, we used batch size of 4k. 
For the others, we used transformer (base) architecture, which has 6 layers, embedding size of 512, feed-forward network hidden dimension of 2048 and 8 attention heads. Dropout rate of 0.1 was used for WMT14 En-De while 0.3 was used for WMT16 En-Ro. Following the original settings from We applied compound split to compute the (tokenized) BLUE scores for reporting performance on the test sets, and used detokenized BLEU scores for validation and choosing the best checkpoints. We used the average checkpoint over the 5 last checkpoints for WMT14 En-De and the 5 best checkpoints for others for evaluation. Following legacy settings, beam search was adopted as the decoding strategy with the beam size of 4 along with length penalty of 0.6 for WMT14 En-De, and beam size of 5 and length penalty of 1.0 for others. For training CBMI models, we used the same scale hyperparameters λ s = 0.3 and λ t = 0.1 as suggested by For weight smoothing, we used the following formula to increase the value of c towards a fixed targeted value c 0 exponentially: where t is the training step. We set c 0 as 0.3 for WMT14 En-De and 0.6 for the others. Then, we searched for the value of r and τ which basically determines how fast we want the weighted loss to participate in training, especially in earlier stage. Since increasing r and lowering τ have the same effect and vice versa, we fixed r = 0.99 and searched for τ . The values chosen were τ = 4000 for WMT14 En-De, τ = 400 for WMT16 En-Ro, and τ = 800 for IWSLT14 De-En. As training proceeds, translation and language models produce better approximations for the probabilities of unknown true data distribution. We empirically observed that the average NPMI values determined by the model (for the training samples) increase over time. Similarly, for the examples in the validation set, mean NPMI values tend to increase then saturate or start to decrease over time, which we believe to be another signal indicating overfitting other than the rebound in validation loss. The peak mean sentence-level NPMI values on the validation set for IWSLT14 De-En was approximately 0.44. This behavior was consistent among different settings for scheduling the c value, and the peak value did not tend to fluctuate a lot. This can be viewed as models with slightly different configurations reach a sort of consensus on how faithful examples a given dataset provides are, which implies that although we are using relatively smaller models trained from scratch on a smaller dataset, the estimated probabilities are quite reliable and that our proposed NPMI has potential as a metric for evaluating source-target faithfulness to be used for purposes other than token-level adaptive training. Here we repeat the proof from | 1,106 | 1,380 | 1,106 |
COLT5: Faster Long-Range Transformers with Conditional Computation | Many natural language processing tasks benefit from long inputs, but processing long documents with Transformers is expensive --not only due to quadratic attention complexity but also from applying feedforward and projection layers to every token. However, not all tokens are equally important, especially for longer documents. We propose COLT5, a long-input Transformer model that builds on this intuition by employing conditional computation, devoting more resources to important tokens in both feedforward and attention layers. We show that COLT5 achieves stronger performance than LONGT5 with much faster training and inference, achieving SOTA on the long-input SCROLLS benchmark. Moreover, COLT5 can effectively and tractably make use of extremely long inputs, showing strong gains up to 64k input length. | Many natural language processing tasks, such as summarization Over the past few years, many "efficient Transformer" approaches have been proposed that reduce the cost of the attention mechanism over long inputs This paper presents COLT5 (Conditional LongT5), a new family of models that, building on top of LONGT5 In particular, COLT5 divides each feedforward layer and each attention layer into a light branch Figure which is applied to all tokens and a heavy branch which is applied to a set of important tokens, selected specifically for that input and component. The light feedforward branch has lower hidden dimension than standard LONGT5 while the heavy feedforward branch has higher hidden dimension. The light attention branch has fewer heads and applies only local attention, while the heavy attention branch performs full attention over another separately selected set of important tokens. Figure Finally, COLT5 also includes two other modifications to the LONGT5 architecture. COLT5 adds multi-query cross-attention We show that COLT5 performs much faster finetuning and inference with similar or better model quality, improving over LONGT5 on arXiv summarization | Transformer FLOPs COLT5 follows an extensive line of work in attempting to reduce the computational cost of Transformer models, particularly over long inputs. The computational burden of Transformer models has several distinct elements, and different approaches focus on reducing the cost of different components. For that reason, it is helpful to start by providing a breakdown of the computational cost of Transformer components. Table Vanilla self-attention The first challenge of applying a Transformer to a long input is that the FLOPs of the self-attention mechanism scales quadratically in the input length, becoming intractable for long inputs. A large body of work focuses on reducing self-attention cost, restricting attention between a subset of inputs Conditional computation After applying a sparse attention mechanism, the feedforward and attention projection layers account for the majority of the FLOPs. These costs scale with the length of the input, such that processing long inputs is still prohibitively expensive. A common approach to reduce the remaining cost is to employ some form of conditional computation, avoiding applying all model parameters to the entire input. CALM Device utilization FLOPs do not tell the whole story, as modeling choices can influence the effective speed of operations achieved by accelerators. 
For long text inputs, autoregressive decoder inference is very slow due to memory bandwidth constraints from repeatedly loading the long sequence of keys and values Training objectives T5 introduced the span corruption objective As discussed in the previous section, a large proportion of Transformer FLOPs arise from feedforward and projection layers that scale with the length of the input sequence. Therefore, LONGT5 training and inference on long documents remains expensive. COLT5 further reduces the cost of processing long documents through conditional computation, following the intuition that some tokens are more important and therefore benefit more than others from heavy computation. First, some types of tokens may inherently require less computation, such as filler words and punctuation. Second, especially in long documents, large parts of the input may not be relevant to the current question, task, or processing stage. The COLT5 conditional computation mechanism consists of three components: routing modules, conditional feedforward layers, and conditional attention layers. All tokens are processed by standard, lightweight attention and feedforward layers. Routing modules additionally select important tokens from an input at each attention or feedforward layer, and a heavy conditional layer applies additional computation to routed tokens. This section describes each component in detail. Figure Encoder Layer Flops Routing In order to separately select important tokens for each component in each layer, we need a learnable and tractable routing function. We follow the simple three-step mechanism from We select the top-k highest scoring inputs. In order to provide a learning signal to the scoring embedding, we make sure the contribution of the routed tokens to the layer update is scaled according to the routing score, as will be seen later. To provide a better distributed signal to all tokens, we also globally normalize the routing scores to sum up to the number of desired routed tokens using a generalized softmax, resulting in normalized scores si . Each COLT5 layer has three independent routers, one each for the feedforward layer, attention queries, and attention key-values. Conditional Feedforward Intuitively, some token representations may benefit from more processing than others. The COLT5 conditional feedforward layer applies an additional high-capacity feedforward layer to selected tokens. In particular, let X i be the model state of the ith token and si denote the normalized routing score (set to 0 for non-routed tokens). Then the feedforward update for COLT5 is given by The light and heavy feedforward branches differ only in their hidden dimension, with the light branch having smaller hidden dimension than the standard T5 feedforward layer and the heavy branch larger. Let n denote the number of input tokens, m the number of selected tokens, and r L and r H the ratios of light and heavy hidden dimension to standard T5 hidden dimension. Then the FLOPs of the COLT5 layer are given by We set the light and heavy ratios as r L = 1 2 and r H = 4, half and quadruple the standard T5 hidden dimension respectively. For our main experiments, a fraction 1 16 of tokens are routed to the Conditional Attention COLT5 conditional attention operates on the intuition that most tokens have simple, local interactions, but some tokens benefit from heavier processing and long-range interactions. 
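A simplified sketch of the router and the conditional feedforward update X_i ← X_i + FFN_light(X_i) + s̃_i · FFN_heavy(X_i) described above. The dot-product scoring embedding and the softmax-based normalization (rescaled to sum to k) stand in for the generalized softmax referenced in the text; the hidden-dimension ratios follow the stated 1/2× and 4× settings, and the heavy branch is applied densely here only for clarity.

```python
# Simplified COLT5-style conditional feedforward block: every token goes through
# a light FFN, and the top-k routed tokens additionally go through a heavy FFN
# scaled by their normalized routing score.  For clarity the heavy FFN is
# applied to all tokens and then masked; an efficient implementation gathers
# only the routed tokens.
import torch
import torch.nn as nn

class ConditionalFFN(nn.Module):
    def __init__(self, d_model, k, light_ratio=0.5, heavy_ratio=4):
        super().__init__()
        self.k = k
        self.router = nn.Parameter(torch.randn(d_model) / d_model ** 0.5)
        self.light = nn.Sequential(
            nn.Linear(d_model, int(light_ratio * d_model)), nn.ReLU(),
            nn.Linear(int(light_ratio * d_model), d_model))
        self.heavy = nn.Sequential(
            nn.Linear(d_model, heavy_ratio * d_model), nn.ReLU(),
            nn.Linear(heavy_ratio * d_model, d_model))

    def forward(self, x):                        # x: (batch, n_tokens, d_model)
        scores = x @ self.router                 # (batch, n_tokens) routing scores
        top = scores.topk(self.k, dim=-1)        # select k important tokens
        s_tilde = torch.zeros_like(scores)       # normalized scores, 0 if not routed
        s_tilde.scatter_(-1, top.indices,
                         torch.softmax(top.values, dim=-1) * self.k)
        # light branch for all tokens, heavy branch scaled by the routing score
        return x + self.light(x) + s_tilde.unsqueeze(-1) * self.heavy(x)

layer = ConditionalFFN(d_model=64, k=4)
print(layer(torch.randn(2, 32, 64)).shape)       # 4 of 32 tokens routed per example
```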
The COLT5 conditional attention layer applies an additional high-capacity attention layer that attends from selected query tokens to selected key-value tokens. Let sq i denote the normalized routing query score for token i, and skv the keyvalue scores for all tokens (set to 0 if not routed). Then the attention update for COLT5 is given by The light and heavy branches differ in the number of heads and tokens attended to: the light branch has fewer heads and attends to a local context window, while the heavy branch has more heads and attends to all routed key-value tokens. Separately selecting query and key-value tokens also allows the model to differentiate between tokens that require additional information and those that possess such information. Figure We set the light and heavy head ratios as r L = 1 4 and r H = 3 4 , keeping the total number of heads across the light and heavy branches equal to standard T5 heads. For our main experiments a fraction with less than half projection FLOPs and order-ofmagnitude smaller quadratic length scaling compared to LONGT5. Table 2 Global projection and attention FLOPs rounded to readable fractions, exact values are 9 32 and 3 256 . Complexity assumes constant fraction of routed tokens; we show we can do better in practice for extremely long inputs. Conditional computation effectively reduces the computational cost of the encoder. However, for encoder-decoder models with long inputs the majority of inference time is spent in the decoder due to memory bandwidth constraints The UL2 pre-training objective (2) we evaluate COLT5 on extremely long inputs up to 64k tokens and compare scaling against LONGT5; (3) demonstrate COLT5's few-shot capability, investigating how performance changes as input length and number of shots increase, (4) perform a series of ablations to understand the effect of individual COLT5 components, and (5) investigate empirical routing patterns. The remainder of the section outlines our experimental setup, and then describes each of the experiments above. Configurations COLT5 is based on the T5.1.1 architecture We pre-train COLT5 for 1M steps on the C4 dataset Data We evaluate COLT5 on TriviaQA Figure We hypothesize that the advantage of COLT5 over LONGT5 strengthens with input length, as the fraction of important tokens decreases and COLT5 can route a greater proportion of important tokens to the heavy branch. Figure Models trained on the UL2 objective have shown strong few-shot in-context learning (ICL) capabilities Figure This section studies the effect of different choices in the COLT5 recipe. nent for COLT5 Base. Routing First, we note that static routing -evenly distributing routed tokens over the input --leads to massive drop in performance. The importance of routing provides evidence that the model learns to devote capacity to important tokens and the advantage of COLT5 is not merely a result of additional parameters. Sharing routing decisions for query and KV tokens should be compared with v=q, and leads to a modest reduction in quality and increase in speed. The optimal number of routed tokens represents a trade-off between improved performance and computational cost of applying heavier layers. Table Attention COLT5 relies on routing to identify not only tokens that can benefit from important information elsewhere in the input, but also which tokens contain such important information. 
We study whether COLT5 is successful in this task by comparing performance with two different attention settings --v=all, in which routed tokens attend to the entire input, and v=q, which uses equal number of routed keys and values as queries, rather than twice as many. COLT5 appears to occupy a sweet spot, as using fewer routed key-values modestly decreases performance at similar speed but attending to all inputs barely helps at sharply increased cost. Other We compare COLT5 to LONGT5 with multi-query cross-attention, confirming that LONGT5 indeed does not achieve an unexpected quality gain from MQA, and our conservative assumptions in Figures 2, 4 are valid. Next, we evaluate multi-head cross-attention for COLT5, finding that it leads to modestly improved COLT5 performance. However, as MHA exhibits orderof-magnitude slower inference, MQA is clearly favored. Finally, PEGASUS appears to fine-tune slightly better than UL2, though the difference is small and UL2 enables few-shot learning. It is interesting to ask whether COLT5 routed tokens line up with what we consider intuitively important tokens in each document. We investigate this question by studying routing patterns of a Large COLT5 model fine-tuned on TriviaQA. We divide tokens into three categories: (1) question tokens, (2) answer tokens, and (3) other tokens. Figure We propose COLT5, a new model for long-range inputs that employs conditional computation for higher quality and faster speed. COLT5 has light feedforward and attention layers that apply to the entire input, as well as heavy branches that are applied only to a subset of important tokens selected by a learned router. We show that COLT5 achieves stronger performance at any speed compared to LONGT5 on a variety of long-input datasets, and can effectively and efficiently make use of extremely long inputs up to 64k tokens. COLT5 applies conditional computation only in the encoder. Applying conditional computation in the decoder is more complicated; the routing method in COLT5 is not causal, so it isn't applicable when generating token by token. Since decoder-only models and applications with long outputs have become more popular recently, this is a strong limitation of the current approach. Although the routing method in COLT5 could potentially be applied to the input context in a decoder-only model, we didn't investigate this setup. COLT5 is specialized towards long sequences and has to be trained from scratch. For large-scale training and deployment, it is desirable to either train a single model that can handle both short and long sequences, or develop a long-input architecture that can be adapted from an existing large model. The same is true for tokens around the correct answer ("papageno" in this example). Question is heavily routed to the expensive alternative by last layers of the model. Earlier we showed that question and answer tokens are more likely to be selected, but separating routing decisions by layer reveals interesting patterns. At early layers question and answer tokens are only modestly more likely to be selected, with routing probability sharply increasing at later layers and peaking in the last layer. This makes intuitive sense: in early layers the model has not yet had the opportunity to identify which tokens and parts of the document are important. However, the increase is not monotonic and there is strong variation between layers. 
This variation may imply that different layers focus on different types of tokens, or that some routing components do not successfully learn to identify important tokens. To gain better insight, we examine the correlation between routing processes across layers, reported in the corresponding figure and table. | 810 | 1,174 | 810 |
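The routing analyses described above, namely per-layer selection rates for question, answer, and other tokens and the correlation between different layers' routing decisions, can be reproduced from recorded routing masks. A minimal sketch, assuming boolean per-layer masks and per-token category labels are available (the paper's exact analysis pipeline is not given here):

```python
import numpy as np

def routing_statistics(routed_masks, token_categories):
    # routed_masks: list over layers of boolean arrays [n_tokens], True where the
    # token was routed to the heavy branch in that layer.
    # token_categories: array [n_tokens] of labels such as "question", "answer", "other".
    cats = np.unique(token_categories)
    # Fraction of tokens of each category that gets routed, per layer.
    rates = {
        c: [mask[token_categories == c].mean() for mask in routed_masks]
        for c in cats
    }
    # Pearson correlation between the binary routing decisions of different layers.
    decisions = np.stack([m.astype(float) for m in routed_masks])  # [n_layers, n_tokens]
    corr = np.corrcoef(decisions)
    return rates, corr
```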
Automated Chess Commentator Powered by Neural Chess Engine | In this paper, we explore a new approach for automated chess commentary generation, which aims to generate chess commentary texts in different categories (e.g., description, comparison, planning, etc.). We introduce a neural chess engine into text generation models to help with encoding boards, predicting moves, and analyzing situations. By jointly training the neural chess engine and the generation models for different categories, the models become more effective. We conduct experiments on 5 categories in a benchmark Chess Commentary dataset and achieve inspiring results in both automatic and human evaluations. | With games exploding in popularity, the demand for Natural Language Generation (NLG) applications for games is growing rapidly. Related researches about generating real-time game reports It is common knowledge that professional game commentators are usually game players. And expert players can usually provide more thorough analysis than amateurs. Inspired by this, we argue that for chess commentary generation, the generation model needs to know how to think and play in order to provide better outputs. In this paper, we introduce a neural chess engine into our generation models. The chess engine is pre-trained by supervised expert games collected from FICS Database The contributions are summarized as follows: • To the best of our knowledge, we are the first to introduce a compatible neural chess engine to the chess comment generation models and jointly train them, which enables the generation models benefit a lot from internal representations of game playing and analysis. • On all the 5 categories in the Chess Commentary dataset, our proposed model performs significantly better than previous stateof-the-art models. • Our codes for models and data processing will be released on GitHub | The most relevant work is Data-to-text generation is a popular track in NLG researches. Recent researches are mainly about generating from structured data to biography The overview of our approach is shown in Figure In Figure Description Model: Descriptions about the current move intuitively depend on the move itself. However, playing the same move could have different motivations under different contexts. For example, e2e4 is the classic Queen Pawn Open-ing in a fresh start. But it can be forming a pawn defense structure in the middle of the game. Different from previous works for chess commentary generation Quality Model: Harsh et al. ( S , and the wining rate difference v (1) -v (0) as semantic contexts for the decoder. And to model the value of wining rate difference, we introduce a weight matrix W dif f to map the board state-value pair [E (0) ] to the same semantic space of the other contexts by Eq.2. Our quality model is formulated as Eq.3, where Y Qual is the target comment about quality. Comparison Model: Usually, there are more than 10 possible moves in a given board. But not all of them are worth considering. Planning Model: We can always find such scenes where commentators try to predict what will happen assuming they are playing the game. And then they give analysis according to their simulations. Our internal chess engine is able to simulate and predict the game in a similar way (selfplay). We realize our model for planning by imitating the human commentators' behavior. Predicted moves and boards are processed by our multi-choices encoder to tell the potential big moments in the future. 
And we use the multi-choices encoder f M CE to produce the semantic contexts for the decoder. The process to generate planning comment Y P lan is described in Eq.5. Contexts Model: To analyze the situation of the whole game, the model should know about not only the current, but also the future. And similar to the planning model, contexts model takes a series of long-term moves and boards produced by self-play predictions as inputs. In this way, the model comments the game in a god-like perspective. And the semantic contexts is also processed by the multi-choices encoder for generating con-texts comment Y Cont as Eq.6. Each of the above models has a decoder (the hexagon blocks in Figure We denote E ∈ IR n×d as a bunch of raw context vectors, where n is the number of such context vectors and d is the dimension of the vectors. Although the semantic contexts E for different generation models are different as described before, we regard all of the board states, wining rates, and move representations as general semantic contexts. And we use attention mechanism to calculate the attention weights a for vectors in E, where W is a transformation function for the attentional context vectors. The scores are further normalized by a softmax function to a by We compute weighted sum of E with a to produce the attentional context vector z for word decoding z = E a. (10) The internal chess engine is in charge of the mapping from board B to semantic representation E S , predicting possibility distribution D on valid moves, and evaluating the wining rate v for the players. In previous works Given the tuple of game replays (B, M, v ) where M is the corresponding move and v is the ground truth wining rate, we optimize the engine's policy, value function at the same time as Eq.11 shows. When the engine grows stronger, we let the engine produce data by itself in a self-play manner Apart from understanding the board B, commentators also need to know the semantics of the move M . Besides using the chess engine to produce board representations E S , the move encoders also prepare for move embeddings E M as attention contexts for the text decoders. We set the features of the move (starting cell, the move ending cell, the piece at the starting cell, the piece at the ending cell, the promotion state, and the checking state) as a sequential input to a bi-directional RNN (Schuster and Paliwal, 1997). When a decoder requests attention contexts for hidden state h, the encoder offers E = [E M ; E S ] to build attentional context vector following Eq.9 and Eq.10. For Comparison, Planning, and Contexts, there are multiple moves derived from variations and predictions. The model needs to find the bright spots to describe. To encode these moves and offer precise information for the generation models, we propose a multi-choices encoder. Human commentators usually choose different aspects to comment according to their experiences. We use a global vector g to store our models' experiences and choose important moves to comment. Note that g is to be learned. In module (c) of Figure Then we calculate the soft weights of choices c = {c 1 , c 2 , ...} with respect to the board states S = {E 1 S , E 2 S , ...} by Eq.13. For hidden state vector h from decoder, attention weight matrix A = {A 1 , A 2 , ...} are scaled by c via Eq.14. And we finally get attentional context vector z according to A by Eq.15. This approach enables generation models to generate comments with attention to intriguing board states. 
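Since Equations (7)-(10) and (12)-(15) are only partially reproduced in this extract, the following is a hedged sketch of one plausible reading of the attention and multi-choices encoder: the decoder state scores the raw context vectors (Eqs. 7-10), while the global experience vector g weights the candidate variations through summary board-state vectors and rescales each candidate's attention before aggregation (Eqs. 12-15). The per-choice summary vectors and the exact way c rescales the attention are assumptions, not the paper's precise equations.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_context(E, h, W):
    # Eqs. (7)-(10): score every raw context vector in E [n, d] against the decoder
    # hidden state h [d_h] through the transformation W [d, d_h], normalize with
    # softmax, and return the weighted sum as the attentional context z.
    a = softmax((E @ W) @ h)
    return E.T @ a

def multi_choice_context(choice_E, choice_S, h, W, g):
    # Sketch of the multi-choices encoder: the learned experience vector g scores each
    # candidate move/board through a summary state vector (Eq. 13), and the resulting
    # soft weights rescale each candidate's attentional context before aggregation.
    c = softmax(np.array([g @ s for s in choice_S]))
    return sum(ci * attention_context(Ei, h, W) for ci, Ei in zip(c, choice_E))
```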
And the attention weights can be more accurate when g accumulates abundant experiences in training. c = sof tmax(gS) (13) 4 Experiments We conduct our experiments on recently proposed Chess Commentary dataset We do not conduct experiments on the last category. And for the training of chess engine, we collect all of the standard chess game records in the past 10 years from FICS Games Database. And we remove the games where any player's rating below 2,000. There are 36M training data (for single move step) after cleaning. We train our neural chess engine using mixed data consisting of supervised FICS data and unsupervised self-play data. The number of self-play games are set to 0 initially. And it will be increased by 1 when the trained model beats the previous best version (with a wining rate larger than 0.55 in 20 games). During 400 iterations of training, we pick one strong engine and one weak engine for further experiments. The stronger engine loses 1 game and draws 55 games to the weak engine in 100 games. As mentioned in Section 3.2, when training generation models, we use the pretrained chess engine and fine-tune it with the generation models. Here we introduce our models and baselines in the experiments. We call our models the Skilled Chess Commentator (SCC) as they have the skills of playing chess. • SCC-weak: The generation models are integrated with the weak engine mentioned above, and they are trained independently with respect to the 5 categories in Chess Commentary dataset. • SCC-strong: The model is similar to SCCweak, but integrated with the strong engine. • SCC-mult: This is a multi-task learning model where generation models for different categories share the strong chess engine, move encoder, the multi-choices encoder and the value mapping matrix W val . • GAC: The state-of-the-art method proposed by • KWG: Another state-of-the-art method for game commentary generation • Temp: This is a template-based baseline methods. Together with the dataset, Harsh et al. ( • Re: This is a retrieval-based baseline method. For each input in the test set, we find the most matched datum in the training set by numbers of matched input board and move features. We develop both automatic evaluations and human evaluations to compare the models. For automatic evaluations, we use BLEU We also conduct human evaluation to make more convincing comparisons. We recruit 10 workers on Amazon Mechanical Turk • Fluency: Whether the comment is fluent and grammatical. • Accuracy: Whether the comment correctly describes current board and move. • Insights: Whether the comment makes appropriate predictions and thorough analysis. • Overall: The annotators' overall impression about comments. We present the automatic evaluation results in Table 1. Our SCC models outperform all of the baselines and previous state-of-the-art models. KWG and GAC provide competitive results. With the help of external information from powerful chess engines, GAC shows good performances on Quality and Comparison. Although our internal chess engine is no match for the external engines that GAC uses at playing chess, it turns out that our models with directly internal information can better bridge the semantic spaces of chess game and comment language. As for the comparisons within our models, SCC-strong turns to be better than SCC-weak, which supports our assumption that better skills enable more precise predictions, resulting in better comments. Training with multi-task learning seems to hurt the overall performances a little. 
But SCC-mult still has the state-of-the-art performances. And more important, it can react to all sub-tasks as a whole. The human annotators are required to be good at playing chess. That is to say, they are the true audiences of the commentator researches and applications. By introducing human evaluations, we further reveal the performances in the perspective of the audiences. We show the average scores and significance test results in Table To have a better view of comparisons among model outputs, we present and analyze some samples in Figure For the first example, black can exchange white's e3 knight and e4 pawn with the b4 bishop if white takes no action. But white chooses to protect the e3 knight with the g1 knight. All the models generate comments about Description. Temp directly describes the move without explanation. Re finds similar situation in the training set and explains the move as defense and developing. KWG is right about developing, but wrong about the position of the knight and the threats. GAC produces safe comment about the developing. And our model has a better understanding about the boards. It annotates the move correctly and even gives the reason why white plays this move. For the second example, the game is at the 3rd turn. White gives up the pawn on d5 and chooses to push the queen's pawn. Re and KWG both make a mistake and recognize the move d2d4 as Queen Pawn Opening. Temp thinks white is going to win because white have the advantage of one more pawn. However, Temp cannot predict that white will lose the advantage in the next move. Our model is able to predict the future moves via self-play. And it draws the conclusion that pushing the queen's pawn can open up the ways for the queen and bishop for future planning. In this work we propose a new approach for automated chess commentary generation. We come up with the idea that models capable of playing chess will generate good comments, and models with better playing strength will perform better in generation. By introducing a compatible chess engine to comment generation models, we get models that can mine deeper information and ground more insightful comments to the input boards and moves. Comprehensive experiments demonstrate the effectiveness of our models. Our experiment results show the direction to further developing the state-of-the-art chess engine to improve generation models. Another interesting direction is to extend our models to multimove commentary generation tasks. And unsupervised approaches to leverage massive chess comments in social media is also worth exploring. | 619 | 1,201 | 619 |
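Equation (11), which the engine is trained with, is not reproduced in this extract. A common form for such a joint policy/value objective, cross-entropy over the distribution of moves plus a squared error on the predicted winning rate, is sketched below as an assumption about its shape rather than the paper's exact formula.

```python
import torch
import torch.nn.functional as F

def engine_loss(policy_logits, value_pred, move_target, value_target):
    # policy_logits: [batch, n_moves]; move_target: [batch] indices of the played moves;
    # value_pred / value_target: [batch] predicted and ground-truth winning rates.
    policy_loss = F.cross_entropy(policy_logits, move_target)
    value_loss = F.mse_loss(value_pred, value_target)
    return policy_loss + value_loss
```

The same objective would apply to both the supervised FICS games and the self-play games that are added as the engine improves.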
Simultaneous Translation Policies: From Fixed to Adaptive | Adaptive policies are better than fixed policies for simultaneous translation, since they can flexibly balance the tradeoff between translation quality and latency based on the current context information. But previous methods on obtaining adaptive policies either rely on complicated training process, or underperform simple fixed policies. We design an algorithm to achieve adaptive policies via a simple heuristic composition of a set of fixed policies. Experiments on Chinese→English and German→English show that our adaptive policies can outperform fixed ones by up to 4 BLEU points for the same latency, and more surprisingly, it even surpasses the BLEU score of full-sentence translation in the greedy mode (and very close to beam mode), but with much lower latency. | Simultaneous translation (ST) aims to provide good translation quality while keeping the latency of translation process as low as possible. This is very important for the scenarios that require simultaneity, such as international summits and negotiations. For this, human interpreters usually start translation before the source sentence ends. However, this makes the translation process much more challenging than the full-sentence translation, because to balance the translation quality and latency, interpreters need to make decisions on when to continue translation and when to stop temporarily to wait for more source side information, which are difficult, especially for syntactically divergent language pairs, such as German and English. The above decisions can be considered as two actions: READ (wait for a new source word) and WRITE (emit a translated target word) By contrast, adaptive policies try to make decisions on the fly using the currently available information. It is obvious that this kind of policies is more desirable for ST than the fixed ones, and different methods are explored to achieve an adaptive policy. The majority of such methods In this paper, we propose to achieve an adaptive policy via a much simpler heuristic composition of a set of wait-k policies (e.g., k = 1 ∼ 10). See Fig. | Full-sentence translation. Neural machine translation (NMT) model usually consists of two components: an encoder, which encodes the source sentence x = (x 1 , . . . , x m ) into a sequence of hidden states, and a decoder, which sequentially predicts target tokens conditioned on those hidden states and previous predictions. The probability of the predicted target sequence y = (y 1 , . . . , y n ) will be where y <t = (y 1 , . . . , y t-1 ) denotes the target sequence predicted before step t. Simultaneous translation. where g(t) is a monotonic non-decreasing function of t, denoting the number of processed source tokens when predicting y t . This function g(t) can be used to represent a policy for ST. Intuitively, this policy first waits k source tokens and then outputs predicted tokens concurrently with the rest of source sentence. In this example, we will choose an action based on the top probability p top , and apply a new policy (the dotted arrows) after the chosen action. Assume we have a set of wait-k policies and the corresponding models We can obtain an adaptive policy, whose lag at each step is between k min and k max , meaning that at each step, the target sequence falls behind the source sequence at most k max tokens and at least k min tokens. At each step, there is a wait-k policy synchronizing the adaptive policy, meaning that they have the same lag at that step. 
Specifically, at any step t, if the lag of the adaptive policy is k , then we apply the NMT model with the wait-k policy and force it to predict existing target tokens until step t, when the model will make a new prediction as the output of step t. However, the above method only shows how to simulate the adaptive policy to make a prediction at one step if we would like to write at that step, but it does not tell us at which steps we should write. We utilize the model confidence to make such a decision. Specifically, we set a probability threshold ρ k for each wait-k policy. At each step, if the NMT model follows a wait-k policy, and predicts the most likely token with probability higher than the threshold ρ k , then we consider the model is confident on this prediction, and choose WRITE action; otherwise, we choose READ action. Figure We define the process of applying a wait-k model M k with a wait-k policy on a given sequence pair (x, y) by the following which forces model M k to predict y, and returns the top token y top at the final step with the corresponding probability p top . The process of reading and returning a new source token is denoted by READ(), and expression x • x represents to append an element x to the end of sequence x. We denote by <s> and </s> the start symbol and end symbol of a sequence. Then Algorithm 1 gives the pseudocode of the above method. Algorithm 1 ST decoding with an adaptive policy Input: two integers k min and k max , a set of NMT models M k , and a sequence of thresholds Using the corresponding model M k with each waitk policies may not give us the best performance. If we have a set of models trained independently with different wait-k policies, then we can apply ensemble of those models Datasets and models. We conduct experiments on Chinese→English (ZH→EN) and German→English (DE→EN) translation. For ZH→EN, we use NIST corpus (2M sentence pairs) as training set, NIST 2006 as dev set, and NIST 2008 as test set. For DE→EN, we use WMT15 parallel corpus for training, newstest-2013 for validation and newstest-2015 for testing. All datasets are tokenized and segmented into sub-word units with byte-pair encoding Performance with different policies. We first evaluate the performance of each model with different policies, which helps us to choose models for different policies. Specifically, we apply each model with ten different wait-k policies on dev set to compare the performance. Fig. Comparing different methods. We compare our method with others from literature: wait-k method For our method, we test three different cases: (1) single, where for each policy we apply the corresponding model that trained with the same policy; (2) ensemble top-3, where for each policy we apply the ensemble of 3 models that achieve the highest BLEU scores with that policy on dev set; (3) ensemble all, where we apply the ensemble of all 10 models for each policy. For thresholds, we first choose ρ 1 and ρ 10 , and the other thresholds are computed in the following way: for integer 1 ≤ i ≤ 10 and d = (ρ 1 -ρ 10 )/9. We test with ρ 1 ∈ {0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9}, ρ 10 = 0 and ρ 1 = 1, ρ 10 ∈ {0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9}, totally 18 different settings in our experiments. The reason behind these settings is that we assume our adaptive policy cannot be either too aggressive or too conservative (as mentioned at the beginning of Section 3). 
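Algorithm 1 itself is not reproduced here, but the decoding loop described above can be sketched as follows. `models[k].force_decode` is a hypothetical interface that force-decodes the existing target prefix with the wait-k model and returns its next top prediction together with its probability; end-of-sequence handling is simplified.

```python
def adaptive_decode(models, thresholds, read, k_min, k_max, max_len=200):
    src, tgt = [], []
    src_done = False
    while len(tgt) < max_len and (not tgt or tgt[-1] != "</s>"):
        lag = len(src) - len(tgt)
        if lag < k_min and not src_done:
            # Lag below k_min: the policy would be too aggressive, so READ.
            src.append(read())
            src_done = src[-1] == "</s>"
            continue
        k = min(max(lag, k_min), k_max)     # wait-k policy synchronized with the current lag
        y_top, p_top = models[k].force_decode(src, tgt)
        if src_done or lag >= k_max or p_top > thresholds[k]:
            tgt.append(y_top)               # WRITE: confident enough, or lag already at k_max
        else:
            src.append(read())              # READ: wait for one more source token
            src_done = src[-1] == "</s>"
    return tgt
```

The threshold schedule described above can then be instantiated as `thresholds = {i: rho_1 - (i - 1) * (rho_1 - rho_10) / 9 for i in range(1, 11)}`, so that smaller k (more aggressive policies) requires higher confidence before a WRITE.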
The policy is the most aggressive for k = 1, so we set ρ 1 as the largest; while for k = 10 the policy is the most conservative, so we set ρ 10 the smallest. The comparison is provided in Fig. Efficiency. To evaluate the efficiency, we present in Table Method Time per Token Full-sentence 0.0122 s Wait-3 0.0162 s Single (ρ 1 = 0.4, ρ 10 = 0) 0.1057 s Ensemble Top-3 (ρ 1 = 0.4, ρ 10 = 0) 0.2085 s We have designed a simple heuristic algorithm to obtain an adaptive policy based on a set of wait-k policies, and applied ensemble in our method to improve the translation quality while maintaining low latency. Experiments show that our method not only outperforms the original wait-k method with relatively large gap, but also surpasses greedy full-sentence translation with much lower latency. We provide the complete results of Figure | 773 | 1,317 | 773 |
hyperdoc2vec: Distributed Representations of Hypertext Documents | Hypertext documents, such as web pages and academic papers, are of great importance in delivering information in our daily life. Although being effective on plain documents, conventional text embedding methods suffer from information loss if directly adapted to hyper-documents. In this paper, we propose a general embedding approach for hyper-documents, namely, hyperdoc2vec, along with four criteria characterizing necessary information that hyper-document embedding models should preserve. Systematic comparisons are conducted between hyperdoc2vec and several competitors on two tasks, i.e., paper classification and citation recommendation, in the academic paper domain. Analyses and experiments both validate the superiority of hyperdoc2vec to other models w.r.t. the four criteria. | The ubiquitous World Wide Web has boosted research interests on hypertext documents, e.g., personal webpages To model hypertext documents, various efforts Conventional attempts on utilizing embedding techniques in hyper-doc-related tasks generally fall into two types. The first type • What information should hyper-doc embedding models preserve, and what nice property should they possess? • Is there a general approach to learning taskindependent embeddings of hyper-docs? To answer the two questions, we formalize the hyper-doc embedding task, and propose four criteria, i.e., content awareness, context awareness, newcomer friendliness, and context intent aware-ness, to assess different models. Then we discuss simple downcasting-based adaptations of existing approaches w.r.t. the above criteria, and demonstrate that none of them satisfy all four. To this end, we propose hyperdoc2vec (h-d2v for short), a general embedding approach for hyperdocs. Different from most existing approaches, h-d2v learns two vectors for each hyper-doc to characterize its roles of citing others and being cited. Owning to this, h-d2v is able to directly model hyperlinks or citations without downgrading them. To evaluate the learned embeddings, we employ two tasks in the academic paper domain We summarize our contributions as follows: • We propose four criteria to assess different hyper-document embedding models. • We propose hyperdoc2vec, a general embedding approach for hyper-documents. • We systematically conduct comparisons with competing approaches, validating the superiority of h-d2v in terms of the four criteria. | Network representation learning is a related topic to ours since a collection of hyper-docs resemble a network. To embed nodes in a network, Document embedding for classification is another focused area to apply document embeddings. Le and Citation recommendation is a direct downstream task to evaluate embeddings learned for a certain kind of hyper-docs, i.e., academic papers. In this paper we concentrate on context-aware citation recommendation Embedding-based entity linking is another topic that exploits embeddings to model certain hyperdocs, i.e., Wikipedia We introduce notations and definitions, then formally define the embedding problem. We also propose four criteria for hyper-doc embedding models w.r.t their appropriateness and informativeness. Let w ∈ W be a word from a vocabulary W , and d ∈ D be a document id (e.g., web page URLs and paper DOIs) from an id collection D. 
After filtering out non-textual content, a hyper-document H is reorganized as a sequence of words and doc ids, Target doc … We also evaluate our model by computing the machine translation BLEU score … … (a) Hyper-documents. Citation as word BLEU evaluate Given a corpus of hyper-docs {H d } d∈D with D and W , we want to learn document and word embedding matrices D ∈ R k×|D| and W ∈ R k×|W | simultaneously. The i-th column d i of D is a kdimensional embedding vector for the i-th hyperdoc with id d i . Similarly, w j , the j-th column of W, is the vector for word w j . Once embeddings for hyper-docs and words are learned, they can facilitate applications like hyper-doc classification and citation recommendation. A reasonable model should learn how contents and hyperlinks in hyper-docs impact both D and W. We propose the following criteria for models: • Content aware. Content words of a hyperdoc play the main role in describing it, so the document representation should depend on its own content. For example, the words in • Context aware. Hyperlink contexts usually provide a summary for the target document. Therefore, the target document's vector should be impacted by words that others use to summarize it, e.g., paper • Newcomer friendly. In a hyper-document network, it is inevitable that some documents are not referred to by any hyperlink in other hyper-docs. If such "newcomers" do not get embedded properly, downstream tasks involving them are infeasible or deteriorated. • Context intent aware. Words around a hyperlink, e.g., "evaluate . . . by" in Figure We note that the first three criteria are for hyperdocs, while the last one is desired for word vectors. In this section, we first give the background of two prevailing techniques, word2vec and doc2vec. Then we present two conversion approaches for hyper-documents so that w2v and d2v can be applied. Finally, we address their weaknesses w.r.t. the aforementioned four criteria, and propose our hyperdoc2vec model. In the remainder of this paper, when the context is clear, we mix the use of terms hyper-doc/hyperlink with paper/citation. w2v is regarded as a special context vector to average. Analogously, pv-dbow uses IN document vector to predict its words' OUT vectors, following the same structure of skip-gram. Therefore in pv-dbow, words' IN vectors are omitted. To represent hyper-docs, a straightforward strategy is to convert them into plain documents in a certain way and apply w2v and d2v. Two conversions following this strategy are illustrated below. Citation as word. This approach is adopted by Context as content. It is often observed in academic papers when citing others' work, an author briefly summarizes the cited paper in its citation context. Inspired by this, we propose a contextas-content approach as in Figure Besides citation-as-word with w2v and contextas-content with d2v (denoted by d2v-cac for short), there is also an alternative using d2v on documents with citations removed (d2v-nc for 2 It is designed for document visualization purposes. short). We made a comparison of these approaches in Table First, w2v is not content aware. Following our examples in the academic paper domain, consider the paper (hyper-doc) The above limitations are caused by the conversions of hyper-docs where certain information in citations is lost. For a citation d s , C, d t , citationas-word only keeps the co-occurrence information between C and d t . Context-as-content, on the other hand, mixes C with the original content of d t . 
Both approaches implicitly downgrade citations d s , C, d t to C, d t for adaptation purposes. To learn hyper-doc embeddings without such limitations, we propose hyperdoc2vec. In this model, two vectors of a hyper-doc d, i.e., To model contents' impact on document vectors, we simply consider an additional objective function that is identical to pv-dm, i.e., enumerate words and contexts, and use the same input architecture as Figure (4) and use it to replace every log P (d t |d s , C). Following Unlike the other models in Table In this section, we first introduce datasets and basic settings used to learn embeddings. We then discuss additional settings and present experimental results of the two tasks, i.e., document classification and citation recommendation, respectively. Table We use three datasets from the academic paper domain, i.e., NIPS 4 ACL anthology 5 and DBLP 6 , as shown in Table In this task, we classify the research fields of papers given their vectors learned on DBLP. To obtain labels, we use Cora 8 , a small dataset of Computer Science papers and their field categories. We keep the first levels of the original categories, 4 7 In Table Second, owning to different context awareness, d2v-cac consistently outperforms d2v-nc in terms of all metrics and settings. Third, w2v has the worst performance. The reason may be that w2v is neither content aware nor newcomer friendly. We will elaborate more on the impacts of the two properties in Section 5.2.2. Finally, no matter whether DeepWalk vectors are used, h-d2v achieves the best F 1 scores. However, when OUT vectors are involved, h-d2v with DeepWalk has slightly worse performance. A possible explanation is that, when h-d2v IN and DeepWalk vectors have enough information to train the SVM classifiers, adding another 100 features (OUT vectors) only increase the parameter space of the classifiers and the training variance. For w2v with or without DeepWalk, it is also the case. This may be because information in w2v's IN and OUT vectors is fairly redundant. Because content awareness and newcomer friendliness are highly correlated in Table When writing papers, it is desirable to recommend proper citations for a given context. This could be achieved by comparing the vectors of the context and previous papers. We use all three datasets for this task. Embeddings are trained on papers before 1998, 2012, and 2009, respectively. The remaining papers in each dataset are used for testing. We compare h-d2v with all approaches in Sec-tion 4.2, as well as NPM First, for w2v vectors, Nalisnick et al. ( Second, for d2v-based approaches, we use the learned model to infer a document vector d for the context words, and use d to rank IN document vectors by cosine similarity. Among multiple attempts, we find this choice to be optimal. Third, for h-d2v, we adopt the same scoring and ranking configurations as for w2v. Finally, for NPM, we adopt the same ranking strategy as in In Table First, among all datasets, all methods perform relatively well on the medium-sized ACL dataset. This is because the smallest NIPS dataset provides too few citation contexts to train a good model. Moreover, DBLP requires a larger dimension size k to store more information in the embedding vectors. We increase k and report the Rec@10 scores in Figure Third, the d2v-cac approach outperforms its variant d2v-nc in terms of all datasets and metrics. This indicates that context awareness matters in the citation recommendation task. 
Fourth, the performance of NPM is sandwiched between those of w2v's two variants. We have tried our best to reproduce it. Our explanation is that NPM is citation-as-word-based, and only depends on citation contexts for training. Therefore, it is only context aware but neither content aware nor newcomer friendly, and behaves like w2v. Finally, when retrofitting pv-dm, h-d2v generally has the best performance. When we substitute pv-dm with random initialization, the performance is deteriorated by varying degrees on different datasets. This implies that content awareness is also important, if not so important than context awareness, on the citation recommendation task. Second, among all embedding-based methods, h-d2v has the best citation function classification results, which is close to Finally, the d2v-cac vectors are only good at Neutral, the largest class. On the other classes and global F 1 , they are outperformed by w2v vectors. To study how citation function affects citation recommendation, we combine the 2,824 labeled citation contexts and another 1,075 labeled contexts the authors published later to train an SVM, and apply it to the DBLP testing set to get citation functions. We evaluate citation recommendation performance of w2v (I4O), d2v-cac, and h-d2v on a per-citation-function basis. In Figure We focus on the hyper-doc embedding problem. We propose that hyper-doc embedding algorithms should be content aware, context aware, newcomer friendly, and context intent aware. To meet all four criteria, we propose a general approach, hyperdoc2vec, which assigns two vectors to each hyper-doc and models citations in a straightforward manner. In doing so, the learned embeddings satisfy all criteria, which no existing model is able to. For evaluation, paper classification and citation recommendation are conducted on three academic paper datasets. Results confirm the effectiveness of our approach. Further analyses also demonstrate that possessing the four properties helps h-d2v outperform other models. | 787 | 1,616 | 787 |
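To make the citation-ranking setup above concrete, here is a hedged sketch of how candidate documents could be scored at recommendation time with h-d2v-style embeddings: the citing document's IN vector is averaged with the IN vectors of the context words, and candidates are ranked by the dot product with their OUT vectors. The averaging architecture is an assumption based on the pv-dm-style description, not the paper's exact equation.

```python
import numpy as np

def rank_citations(ds_id, context_words, IN_docs, IN_words, OUT_docs):
    # IN_docs / IN_words / OUT_docs: dicts mapping doc ids or words to vectors.
    ctx = [IN_docs[ds_id]] + [IN_words[w] for w in context_words if w in IN_words]
    h = np.mean(ctx, axis=0)                              # averaged citing-context representation
    scores = {d: float(v @ h) for d, v in OUT_docs.items()}
    return sorted(scores, key=scores.get, reverse=True)   # candidate doc ids, best first
```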
ODE Transformer: An Ordinary Differential Equation-Inspired Model for Sequence Generation | Residual networks are an Euler discretization of solutions to Ordinary Differential Equations (ODE). This paper explores a deeper relationship between Transformer and numerical ODE methods. We first show that a residual block of layers in Transformer can be described as a higher-order solution to ODE. Inspired by this, we design a new architecture, ODE Transformer, which is analogous to the Runge-Kutta method that is well motivated in ODE. As a natural extension to Transformer, ODE Transformer is easy to implement and efficient to use. Experimental results on the large-scale machine translation, abstractive summarization, and grammar error correction tasks demonstrate the high genericity of ODE Transformer. It can gain large improvements in model performance over strong baselines (e.g., 30.77 and 44.11 BLEU scores on the WMT'14 English-German and English-French benchmarks) at a slight cost in inference efficiency. | Residual networks have been used with a great success as a standard method of easing information flow in multi-layer neural models where F (•, •) is the function of the layer and θ t is its parameter. Interestingly, recent work in machine learning (2) * Corresponding author. where y(t) and θ(t) are continuous with respect to t. In this way, we can call Eq. ( This paper continues the line of research on the ODE-inspired method. The basic idea is to use a high-order method for more accurate numerical solutions to the ODE. This leads to a larger ODE block that generates a sequence of intermediate approximations to the solution. We find that the larger ODE block is sufficient to take the role of several ODE blocks with first-order solutions. The benefit is obvious: the use of fewer ODE blocks lowers the risk of introducing errors in block switching, and the high-order method reduces the approximation error in each ODE block. See Figure Our method is parameter-efficient because θ(t) is re-used within the same ODE block. As another "bonus", the model can be improved by learning coefficients of different intermediate approximations in a block. We evaluate our method in strong Transformer systems, covering both the wide (and big) model and the deep model. For machine translation tasks, ODE Transformer achieves 30.77 and 44.11 BLEU scores on the WMT'14 En-De and En-Fr test sets, setting a new state-of-the-art on the WMT'14 En-Fr task. It also significantly outperforms baselines on abstractive summarization and grammar error correction tasks. | We start with a description of Transformer, followed by its relationship with ODEs. We choose Transformer for our discussion and experiments because it is one of the state-of-the-art models in recent sentence generation tasks. Transformer is an example of the encoder-decoder paradigm where LN(•) is the layer normalization function, The relationship between ResNet and ODEs was first proposed by In numerical methods of ODEs, we want to ensure the precise solutions to the ODEs in a minimum number of computation steps. But the Euler method is not "precise" because it is a first-order method, and naturally with local truncation errors. The global error might be larger if we run it for a number of times. 
2 This is obviously the case for Transformer, especially when the multi-layer neural network arises a higher risk of instability in solving the ODEs Here we use the Runge-Kutta methods for a higher order solution to ODEs where h is the step size and could be simply 1 in most cases. F i is an intermediate approximation to the solution at step t + α i h. α, β and γ are coefficients which can be determined by the Taylor series of y t+1 (10), and compute F i by This makes the system more parameter-efficient. As would be shown in our experiments, the highorder Runge-Kutta methods can learn strong NMT systems with significantly smaller models. The Runge-Kutta methods are general. For example, the Euler method is a first-order instance of them. For a second-order Runge-Kutta (RK2) block, we have This is also known as the improved Euler method. Likewise, we can define a fourth-order Runge-Kutta (RK4) block to be: See Figure In our preliminary experiments, the RK2 and RK4 methods yielded promising BLEU improvements when the model was shallow. But it was found that the improvements did not persist for deeper models. To figure out why this happened, let us review the Runge-Kutta methods from the angle of training. Take the RK2 method as an example. We rewrite Eq. ( Let E be the loss of training, L be the number blocks of the model, and y L be the model output. The gradient of E at y t is where Seen from Eq. ( The problem somehow attributes to the small coefficients of F i , that is, γ 1 = γ 2 = 1 2 . A natural idea is to empirically set γ i = 1 to eliminate the product factor of less than 1 in gradient computation, although this is not theoretically grounded in standard Runge-Kutta methods. We rewrite Eq. ( Then, we have the gradient, like this This model is easy to optimize because ∂E ∂y L can be passed to lower-level blocks with no scales. Note that, the methods here are instances of parameter sharing where ODE Transformer is efficient to use. As we only apply the ODE design schema to the encoder side, it only brings minor impacts on the inference speed due to the autoregressive decoding schema. Another concern here is memory consumption. ODE Transformer consumes more memory than the baseline in the same depth since we need to store the intermediate approximations in the forward pass. But the additional consumption is less than that of the baseline who has the same computation cost, which is acceptable for most scenarios. We give a quantitative analysis in Section 5. We evaluated the ODE Transformer on three sequence generation tasks: machine translation, abstractive summarization and grammar error correction. The datasets we used are elaborated in the following section, and more details of experimental setups could be found in Appendix A and B. Machine Translation We report results on three WMT benchmarks. For the WMT'14 English-German (En-De) task, the training data consisted of approximately 4.5M tokenized sentence pairs, as in For the WMT'16 English-Romanian (En-Ro) task, we replicated the setup of Abstractive Summarization We also tested the models' ability to process long sequences on the CNN-DailyMail summarization task We used a shared BPE with 30K operations, resulting in a vocabulary of 32, 580 entries. 
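To make the block structure above concrete, the RK2 (improved Euler) and RK4 updates can be sketched as below, with the layer function F (self-attention, FFN, or both, as studied in the ablations) reused for every intermediate approximation and step size h = 1. Layer normalization and the exact residual placement are omitted, and the RK4 coefficients shown are the classical ones, which is an assumption about the equation dropped from this extract.

```python
def rk2_block(y, F, gamma1=0.5, gamma2=0.5):
    # Improved Euler: two intermediate approximations sharing the same parameters.
    # gamma1/gamma2 may be fixed to 1/2, set to 1, or learned, as discussed above.
    F1 = F(y)
    F2 = F(y + F1)
    return y + gamma1 * F1 + gamma2 * F2

def rk4_block(y, F):
    # Classical fourth-order Runge-Kutta block with h = 1, reusing F for all four
    # intermediate approximations.
    F1 = F(y)
    F2 = F(y + 0.5 * F1)
    F3 = F(y + 0.5 * F2)
    F4 = F(y + F3)
    return y + (F1 + 2.0 * F2 + 2.0 * F3 + F4) / 6.0
```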
The evaluation metric was F1-Rouge Grammar Error Correction We used the following datasets as the training data, including National University of Singapore Corpus of Learner English (NUCLE) The truncation error analysis is conducted on the Penn Treebank Results of En-De and En-Fr Table Transformer 29M 27.84 Table When we switch to deep models, our method is more parameter efficient. E.g., RK2-block is comparable with a strong 48-layer system Here we investigate some interesting issues. For simplicity, we call RK2-block with coefficients initialized by 1 as RK2-block-v1, and learnable coefficients (Eq. ( In fact, we cannot obtain the "true" solution of each block output in NMT, because we mainly experimented on the encoder side. Instead, we tested our system on the language modeling task, where the perplexity between the single-layer model output and the ground truth could be regarded as the truncation error with no error propagations. Table Ablation Study on Different F (•, •) As stated in Section 3, the F (•, •) function can either be SAN, FFN or both of them (SAN+FFN). As shown in Figure We also collect the gradient information of several welltrained systems during training. Figure Then, we take a comprehensive analysis of several ODE design schemas. As stated in Deep Transformer models Recently, deep Transformer has witnessed tremendous success in machine translation, especially on WMT news tasks To speed up the training, an alternative way is to train a shallow model first and progressively increase the model depth This paper explores the relationship between Transformer and ODEs. We propose ODE Transformer to help the model benefit from high-order ODE solutions. Experimental results on the three representative sentence generations tasks (i.e., machine seeds, and we averaged the last 5/10 checkpoints for fair comparisons with previous work. The detail of Base/Deep/Wide configurations is as follows: • Base/Deep Model. The hidden size of selfattention was 512, and the dimension of the inner-layer in FFN was 2, 048. We used 8 heads for attention. For training, we set all dropout to 0.1 as default, including residual dropout, attention dropout, ReLU dropout. Label smoothing ls = 0.1 was applied to enhance the generation ability of the model. For deep models, we only enlarged the encoder depth considering the inference speed. • Wide (or Big) Model. We used the same architecture as Transformer-Base but with a larger hidden layer size 1, 024, more attention heads (16), and a larger feed forward inner-layer (4, 096 dimensions). The residual dropout was set to 0.3 for the En-De task and 0.1 for the En-Fr task. For the language modeling task, the hidden size was 512, and the filter size of the FFN was 2, 048. We set all the dropout rates as 0.1, including the residual dropout, attention dropout and ReLU dropout. Each model was trained up to 20 epochs, and most models achieved the lowest PPL on the validation set when the epoch is 10. Then the validation PPL began to increase, though the training PPL is still declining. The warmup step was 2, 000 and the batch size was 4, 096. The max learning rate was set to 0.0007. Evaluation For machine translation, we measured performance in terms of BLEU. Both tokenized BLEU and SacreBLEU 11 scores were re-11 Comparison on the CNN/DailyMail Dataset We summarize the previous results on the CNN/DailyMail dataset (See Table We have emphasized the importance of automatic coefficient learning in Section 3.2. 
The forward pass of RK2-block can be described as where γ 1 and γ 2 are coefficients which can be numerical suggested or learnable. Here we exhibit the comparison of various scaling methods on the WMT'14 En-De dataset, and the results are listed in Table Case Study on the GEC Task Table As we aforementioned, the ODE design schema somehow shares a similar merit with the weight sharing, especially when the coefficients are set to 1. This is because we reuse the same function F to compute the intermediate approximation at each timestep, and it is also an effective way to apply the higher-order ODE into the Transformer architecture. Compared with weight sharing (line 1 in Table Next, we make a detailed comparison between the proposed ODE Transformer and previous studies Compared with RKNet RKNet Source Social media sites such as Facebook has allow us to share our pictures or even chat online with our parents while we are overseas . Reference Social media sites such as Facebook have allowed us to share our pictures or even chat online with our parents while we are overseas . Baseline Social media sites such as Facebook allow us to share our pictures or even chat online with our parents while we are overseas . Other than that , I believe that the strong bond we have with our family is the biggest pillar of support to the carrier . Table Compared with N-ODE As we discussed in the related work, our work is complementary to Baier-Reinio and De Sterck (2020)'s work. We empirically demonstrate the effectiveness of integrating ODE design schema into Transformer on several sequence generation tasks. This work may shed light on the design of effective Transformer architectures from the numerical perspective and provides stronger baselines to the literature. Compared with CSAODE The differences between these two works are summarized below: (i) As we emphasized above, the benchmarks we experimented on are quite different. They mainly validated the proposed CSAODE on text classification and QA tasks. (ii) The proposed CSAODE On one side , it is obvioualy that many advantages have been brought to our lives . Reference On the one hand , it is obvious that many advantages have been brought to our lives . Baseline On one hand , it is obvious that many advantages have been brought to our lives . Source Other than that , I believe that the stong bond we have with our family is the biggest pillar of support to the carrier . Reference Other than that , I believe that the strong bond we have with our family is the biggest pillar of support to the carrier . Baseline Other than that , I believe that the stong bond we have with our family is the biggest pillar of support to the carrier . Let E be the loss of training, L be the number blocks of the model, and y L be the model output. Here, we define Then the information flow of the RK2 method can be described as follows: where ∂z k ∂y k = 1 + ∂F (y k ,θ k ) ∂y k . In this way, the detail derivation of Eq. ( With the chain rule, the error E propagates from the top layer y L to layer y t by the following formula: Here we have Then, put the Eq. ( Similarly, we can easily obtain the gradient of RK2 method where γ i = 1: | 927 | 1,558 | 927 |
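The forward and backward equations of the RK2 block referred to above are dropped from this extract; the LaTeX block below is a hedged reconstruction consistent with the surrounding derivation (with z_t = y_t + F_1 denoting the intermediate point), not a verbatim copy of the paper's equations.

```latex
\begin{aligned}
F_1 &= F(y_t, \theta_t), \qquad F_2 = F(y_t + F_1, \theta_t), \qquad
y_{t+1} = y_t + \gamma_1 F_1 + \gamma_2 F_2,\\
\frac{\partial E}{\partial y_t}
  &= \frac{\partial E}{\partial y_{t+1}}
     \left( I + \gamma_1 \frac{\partial F_1}{\partial y_t}
            + \gamma_2 \frac{\partial F_2}{\partial z_t}
              \left( I + \frac{\partial F_1}{\partial y_t} \right) \right),
  \qquad z_t = y_t + F_1 .
\end{aligned}
```

With gamma_1 = gamma_2 = 1/2 the second and third terms are halved, which shrinks the gradient reaching lower blocks; setting gamma_1 = gamma_2 = 1 removes that scaling, matching the motivation given earlier.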
Exploring Distributional Shifts in Large Language Models for Code Analysis | We systematically study how three large language models with code capabilities -CodeT5, Codex, and ChatGPT -generalize to out-ofdomain data. We consider two fundamental applications -code summarization, and code generation. We split data into domains following its natural boundaries -by an organization, by a project, and by a module within the software project. We establish that samples from each new domain present all the models with a significant challenge of distribution shift. We study how established methods adapt models to better generalize to new domains. Our experiments show that while multitask learning alone is a reasonable baseline, combining it with few-shot finetuning on examples retrieved from training data can achieve very strong performance. Moreover, this solution can outperform direct finetuning for very low-data scenarios. Finally, we consider variations of this approach to create a more broadly applicable method to adapt to multiple domains at once. We find that for code generation, a model adapted to multiple domains simultaneously performs on par with those adapted to a single domain 1 . | Since the late 2000s, researchers have been reporting poor generalization of statistical learning models to new software systems However, the challenges of distribution shifts stemming from the hierarchical nature of software data, as depicted in Figure Figure Next, we explore ways to improve the out-ofdomain generalization of large language models with code capabilities, recognizing that relying on labeled in-domain data for every new domain is impractical. Instead, we investigate the use of labeled out-of-domain data and small amounts of unlabelled in-domain data to enhance generalization. We test methods known to be successful in other transfer learning scenarios, such as metalearning Lastly, we study if we can make the code models more broadly applicable and retain their generalization capacities, rather than having to adapt them to every new domain? Depending on the approach to model adaptation (e.g. weight update vs in-context demonstrations) we vary the set of retrieved examples for each new domain, or for each test input individually. We compare performance obtained this way with that of the models that are adapted simultaneously to multiple domains (or instances, correspondingly). We find that Codex is very sensitive to these changes, so it is best to retrieve similar instances for each test data point. On the other hand, CodeT5 has a minor drop in code summarization and a negligible drop in code generation. This makes it feasible to adapt and apply CodeT5 to multiple domains simultaneously with minimal tradeoff, eliminating the need to store separate copies of the model for each domain. | The shifts in underlying semantics between the training and evaluation data can be one of the most impacting factors for deteriorating performance at test time. Prior work in code analysis has mainly focused on cross-project shifts, i.e. training and evaluating the model on disjunct sets of code projects. Additionally, the studies were mainly conducted in the context of traditional machine learning methods, such as linear classifiers, support vector machines, and later, LSTMs More recent works consider shifts caused by different authors of the code, the timeline of the project, distributions of code tokens, etc In-Context Learning. Retrieval Based Example Selection. 
It has been shown in For every code data point in the dataset, we have information about the organization, project, and the module within the project that the data point comes from. Based on this information, we can group data points into sets, and end up with three sets of sets, as illustrated in Figure We use CodeSearchNet We want to keep all of the domains in X test unseen, and for that reason, we remove any domain from X test that also appears in X train . This can happen because CodeSearchNet dataset is split into partitions by projects, so the same organizations can appear in different splits. This way, any domain coming from X test is, by our definition, outof-domain for any model trained on X train . We further split each domain τ i ⊂ X test into τ i train , τ i dev and τ i test . The evaluation is performed on τ i test . τ i train and τ i dev are used to obtain a proxy for the upper-bound performance of the model if the domain τ i was seen during training, i.e. if there is no distribution shift for τ i test . Preprocessing We use the "path" field of the data point to determine each code snippet's organization, repository, and lowest-level folder. Using 5 different random seeds, we divide a domain into τ i train , τ i dev , and τ i test . We aim to have at least 32 samples each in τ i test and τ i dev , and up to 32 samples for τ i train . Thus, from X test we filter any domain that has less than 96 samples in total. Final dataset statistics are presented in Table We study two generation applications: code summarization and code generation. Code summarization aims to summarize a code snippet into a natural language description. The code snippet in CodeSearchNet dataset is a function, while the natural language description is the docstring of that function. This task is evaluated with BLEU-4 We experiment with three large language models: (1) CodeT5 In this section, we formulate the research questions that we aim to answer and give a more detailed description of the setups that we have used for analyzing and answering each question. RQ 1 How do code models perform on new domains? We test the models' capacity for generalization to new domains by comparing the performance of the models that have been adapted to the new domain using few-shot instances of in-domain data (ID) vs those that only encountered out-of-domain (OOD) data. For CodeT5, few-shot domain adap-tation data is used to update the model weights, whereas for Codex, it is included as demonstrations in the prompt to the model. To answer RQ1, we start from a pre-trained checkpoint of the model and experiment with different approaches for domain adaptation. To answer the current question, we additionally consider different methods to use before the domain adaptation stage, particularly, multi-task learning and meta-learning. The resulting setups are illustrated in Figure For GPT models, we do not perform weight updates. Very large models have been shown to be capable to generalize to unseen tasks with just an instruction. Thus, we evaluate these models with just the task instruction, for example, "Summarize following JavaScript code", and input (i.e. instruction only). Models can be sensitive to the wording of the instructions, so we use a number of different instruction variations for each application and average the results. The full list of instruction variations that we have used with Codex and ChatGPT models is presented in Appendix, Section 7.10. 
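The path-based domain grouping described above can be sketched as follows. The exact layout of the "path" field (in particular whether it already contains the organization and repository segments) is an assumption; the function only illustrates bucketing examples into organization-, repository-, or module-level domains.

```python
from collections import defaultdict

def group_by_domain(examples, granularity="repository"):
    # examples: iterable of dicts with a "path" field such as "org/repo/src/module/file.js"
    # (hypothetical layout); granularity: "organization", "repository", or "module".
    domains = defaultdict(list)
    for ex in examples:
        parts = ex["path"].split("/")
        keys = {
            "organization": parts[0],
            "repository": "/".join(parts[:2]),
            "module": "/".join(parts[:-1]),   # drop the file name, keep the lowest-level folder
        }
        domains[keys[granularity]].append(ex)
    return domains
```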
Moreover, larger models have been shown to "learn" from demonstration examples that are provided as part of their input, even though this process does not involve any weight updates. This phenomenon is known as in-context learning (ICL), which is what we use for domain adaptation for GPT models. Due to the limit on the size of the input to the models (4096 tokens), we use as many demonstrations as would fit, including up to 8 demonstrations with each test example. And since the models can also be sensitive to the order of examples, we shuffle the order of the demonstrations 5 times and average the results. Tables 2 and 3 demonstrate the performance obtained by CodeT5, and Table While there is a difference in performance for CodeT5 model on code generation ID and OOD, the performance difference is next to negligible. We hypothesize that this can be due to the fact that code generation is a more challenging task for a large language model, and so the effect of distribution shift is less noticeable. This observation becomes evident when examining Table From Table We have seen that models for code performed significantly better after being adapted for new domains using in-domain data. However, there are many reasons why adapting to every new domain with the help of labeled examples might be impractical. Thus, we consider some alternative approaches, that would not require labeled data but can hopefully close the performance gap partially or fully. Figure In addition to the strategies above, we experiment with a domain adaptation method that does not require in-domain labeled data for supervision. We use a similarity metric on embeddings obtained from the pre-trained CodeT5 model checkpoint to retrieve k most similar examples for every example in τ test from X train . We set k to 4, 8, or 32, and since |τ test | = 32 the combined size of the set would be 128, 256, or 1024. Finally, we remove any duplicates. We refer to this set as τ ret . For similarity metric, we experiment with cosine similarity, as well as a more recent approach -IsoScore Finding: Strategic adaptation is advantageous in very low data scenarios Figure The same pattern is evident in the challenge evaluation scenario, presented in Figure Adapting the model trained with MTL objective to test domains with the help of stratified supervision provides a considerable boost to the performance of CodeT5 and Codex. Results for CodeT5 are shown in Figure First of all, we notice that there is a saturation in terms of gained performance vs the number of stratified supervision or demonstration examples used. For CodeT5 using 32 examples per test instance is almost always worse than using 4 or 8 exam-ples. For Codex, using 4 or 8 examples results in approximately the same performance. Next, for code summarization, retrieving 4 or 8 examples from out-of-domain train data leads to performance comparable, or even better, than that of the model adapted using 8 examples from the test domain. This trend is observed for both Codex and CodeT5, particularly strongly when generalizing to new repositories and new organizations. A similar trend can be observed for code generation, and to a much stronger degree for CodeT5 -stratified supervision models can even outperform models trained with 32 examples from the test domain. While the performance of the stratified supervision models plateau after a certain number of examples, supervision on in-domain samples does not demonstrate such a trend. 
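The retrieval step described above, selecting the k most similar out-of-domain training examples per test example by embedding similarity and deduplicating them to form the set referred to as tau_ret, can be sketched as follows. Using mean-pooled encoder states of the pre-trained CodeT5 checkpoint as the embeddings is an assumption, as is the exact deduplication.

```python
import numpy as np

def build_tau_ret(test_emb, train_emb, k=8):
    # test_emb: [n_test, d], train_emb: [n_train, d] example embeddings.
    def normalize(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)

    sims = normalize(test_emb) @ normalize(train_emb).T   # cosine similarities, [n_test, n_train]
    topk = np.argsort(-sims, axis=-1)[:, :k]              # indices of the k nearest training examples
    return sorted(set(topk.flatten().tolist()))           # deduplicated retrieved training indices
```

The retrieved examples are then used either to fine-tune CodeT5 or as in-context demonstrations for Codex, as in the experiments above.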
RQ 3 Can we have more generic solutions for out-of-domain generalization? From our analysis of RQ2, we see that models can generalize better to new domains without relying on labeled data from that domain. Unfortunately, this still requires adapting to every test domain individually for CodeT5, and even more strictly -to every test sample individually -for Codex. For example, for CodeT5, this means maintaining multiple copies of the model, performing the training for the adaptation stage multiple times, and storing a large amount of out-of-domain data to retrieve examples from. In this RQ, we experiment with approaches that would eliminate the need to train CodeT5 on multiple domains separately. For Codex, we experiment with sampling from demonstrations collected for the entire domain. For CodeT5, we try two approaches. First, we finetune it on the combined set of τ ret for all domains. We also try using fast vote-k algorithm Finding: Multi-domain code generation models do not require a large performance sacrifice. The results for both models are presented in Table 5. Results for CodeT5 for this experiment are referred to as "FT: combined k", where k is the number of retrieved examples per test example. Fast vote-k is less effective as an adaptation technique compared to fine-tuning on a combined set of retrieved examples, and the results for it are presented in the Appendix Section 7.9. As can be seen, training a single model on combined retrieved samples results in a moderate drop in performance for code summarization, and a negligible drop for code generation. In other words, a model finetuned on stratified supervision data for new domains can be a viable solution for the out-of-domain general-ization problem for code generation. Interestingly, this also indicates that for code generation, good performance on one domain does not hinder the performance on another domain, i.e. there is little to no negative transfer between different domains. For Codex, the results of the experiment are referred to as "ICL: k from τ ret " in Table We evaluate large language models for code -CodeT5, Codex (code-cushman-001), and Chat-GPT (gpt-3.5-turbo) -on two fundamental code applications -code generation and code summarization. We study how the models perform under distribution shifts that can commonly occur due to the nature of the software. We experiment with three granularities for defining domains in applications for code -organization, project, and module or folder. Our experiments show that all models evaluated are susceptible to reduced performance due to domain shifts. We experiment with a number of training and domain adaptation techniques for achieving better out-of-domain generalization. We discover that retrieving similar out-of-domain examples from training data is the most effective approach for adapting to new domains in the absence of in-domain data. In addition, we experiment with adapting models to multiple new domains simultaneously and find that such models can perform very well for code generation. However, we find the generality of the model to be a tradeoff for its performance for code summarization. As can be seen from Table The Javascript keywords that we included in the CodeBleu implementation for evaluation is listed in table 7.1. 7.2 Extended Background 7.2.1 Meta-learning and Multi-task-learning Meta-learning focuses on adapting knowledge gained from previous tasks to be applied to new tasks with limited training examples. 
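For the multi-domain setting ("FT: combined k"), the per-domain retrieved sets are merged and de-duplicated so that a single CodeT5 model can be fine-tuned once for all test domains. The sketch below assumes a per-domain retriever such as the one shown earlier; function and variable names are ours.

```python
def build_combined_finetune_set(test_domains, train_snippets, retrieve_fn, k=8):
    """Union of the retrieved sets tau_ret over all test domains, de-duplicated,
    so one model can be adapted to every new domain at once ('FT: combined k').

    test_domains maps a domain id to its tau_test snippets; retrieve_fn is any
    per-domain retriever, e.g. the cosine-similarity sketch above."""
    combined, seen = [], set()
    for domain_id, tau_test in test_domains.items():
        for example in retrieve_fn(tau_test, train_snippets, k=k):
            if example not in seen:        # drop duplicates shared across domains
                seen.add(example)
                combined.append(example)
    return combined
```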
Most meta-learning algorithms can be categorized into three groups: 1) Black-box meta-learning approaches Multi-task Learning aims to jointly learn several related tasks providing a generalized representation with the added benefit of compute and memory in terms of shared model parameters In our work, we have experimented with both MAML and multi-task learning to check which of the method gives us a better prior for few-shot performance in our setting. Parameter-efficient finetuning: Conventional fine-tuning methods retrains all the model parameters for every new task, which becomes infeasible as the model size increases to the level of GPT-3. In recent times, parameter-efficient methods have been studied and it has been demonstrated that state-of-the-art PEFT methods can match the performance of finetuning all the model's parameters while updating only a tiny fraction of the model parameters. Initially adapters In a study conducted by To better understand how different splits of domains are different from each other, we visualize our resulting test domains in Figure ChatGPT ChatGPT is a conversational variant derived from InstructGPT/GPT 3.5 model For full finetuning of CodeT5, we updated the model for 500 steps using batch size of 8, the best model was identified by the performance on the τ dev portion. For LoRA, we use a rank of 4 with an initialization scale of 0.01 and update all the at-tention and feedforward layers. We train for 1000 steps with a batch size of 8. For multitask learning (MTL) of CodeT5, we update the model for 150K steps on 80% of the X train data, using a batch size of 4. The best checkpoint is selected by evaluating the model on the remaining 20% of X train which was held-out from training. For dual-gen MTL, we followed the same train/dev division strategy as for MTL for code generation, and updated the model for 150K steps with batch size of 4. The best checkpoints were again decided by evaluating the model on the created development set. In particular, we selected two checkpoints -one according to CodeBLEU metric, and another according to BLEU metric for code generation and code summarization respectively. For Model-agnostic meta-learning, we updated the model from the pretrained CodeT5 checkpoint for 10K steps and used the last checkpoint in our experiments. The Vault is a multilingual dataset extracted from GitHub. Despite the fact that it comes pretokenized, we noticed that some of the preprocessing for The Vault is different from the preprocessing of Code-SearchNet. For example, while CodeSearchNet | 1,126 | 1,623 | 1,126 |
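As a rough illustration of the parameter-efficient setup reported above for CodeT5 (LoRA with rank 4 on all attention and feed-forward layers, 1000 steps, batch size 8), adapters could be configured with the peft library as below. The target-module names assume the T5 architecture underlying CodeT5, and the reported initialization scale of 0.01 has no one-to-one peft argument, so the scaling value here is only indicative.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration
from peft import LoraConfig, get_peft_model, TaskType

base = T5ForConditionalGeneration.from_pretrained("Salesforce/codet5-base")
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codet5-base")

lora_cfg = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=4,                                   # LoRA rank from the setup above
    lora_alpha=8,                          # illustrative scaling choice
    lora_dropout=0.0,
    target_modules=["q", "k", "v", "o",    # T5 attention projections
                    "wi", "wo"],           # T5 feed-forward layers
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()         # only the adapters are trainable
```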
Weakly Supervised Semantic Parsing with Abstract Examples | Training semantic parsers from weak supervision (denotations) rather than strong supervision (programs) complicates training in two ways. First, a large search space of potential programs needs to be explored at training time to find a correct program. Second, spurious programs that accidentally lead to a correct denotation add noise to training. In this work we propose that in closed worlds with clear semantic types, one can substantially alleviate these problems by utilizing an abstract representation, where tokens in both the language utterance and program are lifted to an abstract form. We show that these abstractions can be defined with a handful of lexical rules and that they result in sharing between different examples that alleviates the difficulties in training. To test our approach, we develop the first semantic parser for CNLVR, a challenging visual reasoning dataset, where the search space is large and overcoming spuriousness is critical, because denotations are either TRUE or FALSE, and thus random programs are likely to lead to a correct denotation. Our method substantially improves performance, and reaches 82.5% accuracy, a 14.7% absolute accuracy improvement compared to the best reported accuracy so far. | The goal of semantic parsing is to map language utterances to executable programs. Early work on statistical learning of semantic parsers utilized * Authors equally contributed to this work. | k :[[{y loc: ..., color: 'Black', type: 'square', x loc: ... size: 20}, .. x :There is a small yellow item not touching any wall y :True z :Exist(Filter(ALL ITEMS, λx. Figure supervised learning, where training examples included pairs of language utterances and programs Training semantic parsers from denotations rather than programs complicates training in two ways: (a) Search: The algorithm must learn to search through the huge space of programs at training time, in order to find the correct program. This is a difficult search problem due to the combinatorial nature of the search space. (b) Spurious-ness: Incorrect programs can lead to correct denotations, and thus the learner can go astray based on these programs. Of the two mentioned problems, spuriousness has attracted relatively less attention Recently, the Cornell Natural Language for Visual Reasoning corpus (CNLVR) was released In this paper, we present the first semantic parser for CNLVR. Semantic parsing can be coarsely divided into a lexical task (i.e., mapping words and phrases to program constants), and a structural task (i.e., mapping language composition to program composition operators). Our core insight is that in closed worlds with clear semantic types, like spatial and visual reasoning, we can manually construct a small lexicon that clusters language tokens and program constants, and create a partially abstract representation for utterances and programs (Table We show that with abstract representations we can share information across examples and better tackle the search and spuriousness challenges. By pulling together different examples that share the same abstract representation, we can identify programs that obtain high reward across multiple examples, thus reducing the problem of spuriousness. This can also be done at search time, by augmenting the search state with partial programs that have been shown to be useful in earlier iterations. 
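As a toy illustration of how a program such as Exist(Filter(ALL_ITEMS, λx. ...)) is executed against a KB of structured objects (each with a color, shape, size, and absolute location), consider the sketch below. The operator and predicate names are illustrative; the actual functional language defined in the paper, inspired by CLEVR and extended for set-theoretic reasoning, is considerably richer.

```python
from dataclasses import dataclass

@dataclass
class Item:
    x_loc: int
    y_loc: int
    size: int
    color: str
    type: str           # e.g. 'square', 'circle', 'triangle'

# Toy operators in the spirit of the typed language described above.
def Filter(items, pred):          return [it for it in items if pred(it)]
def Exist(items):                 return len(items) > 0
def Count(items):                 return len(items)
def IsYellow(it):                 return it.color == "Yellow"
def IsSmall(it):                  return it.size <= 10
def IsTouchingWall(it, box=100):  # absolute coordinates; box size is illustrative
    return it.x_loc in (0, box) or it.y_loc in (0, box)

# k: one box of the KB; x: "There is a small yellow item not touching any wall"
ALL_ITEMS = [Item(40, 55, 8, "Yellow", "circle"), Item(0, 20, 20, "Black", "square")]
z = Exist(Filter(ALL_ITEMS, lambda it: IsYellow(it) and IsSmall(it)
                                       and not IsTouchingWall(it)))
print(z)   # True -> denotation y = TRUE for this KB
```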
Moreover, we can annotate a small number of abstract utterance-program pairs, and automatically generate training examples, that will be used to warm-start our model to an initialization point in which search is able to find correct programs. We develop a formal language for visual reasoning, inspired by All our code is publicly available at , where x i is an utterance, k i is a KB describing objects in an image and y i ∈ {TRUE, FALSE} denotes whether the utterance is true or false in the KB, our goal is to learn a semantic parser that maps a new utterance x to a program z such that when z is executed against the corresponding KB k, it yields the correct denotation y (see Fig. The original KBs in CNLVR describe an image as a set of objects, where each object has a color, shape, size and location in absolute coordinates. We define a programming language over the KB that is more amenable to spatial reasoning, inspired by work on the CLEVR dataset x: "There are exactly 3 yellow squares touching the wall." z: x: "There are C-QuantMod C-Num C-Color C-Shape touching the wall." z: C-QuantMod x: "There is a small yellow item not touching any wall." z: Exist(Filter x: "One tower has a yellow base." z: GreaterEqual Table Unlike CLEVR, CNLVR requires substantial set-theoretic reasoning (utterances refer to various aspects of sets of items in one of the three boxes in the image), which required extending the language described by We base our model on the semantic parser of More formally the probability of a program is the product of the probability of its tokens given the history: and the probability of a decoded token is computed as follows. First, a Bi-LSTM encoder converts the input sequence of utterance embeddings into a sequence of forward and backward states h Then decoding produces the program token-by-token: where φ z is an embedding for program token z, v is a bag-of-words vector for the tokens in x, z i:j = (z i , . . . , z j ) is a history vector of size K, the matrices W q , W α , W s are learned parameters (along with the LSTM parameters and embedding matrices), and ';' denotes concatenation. Search: Searching through the large space of programs is a fundamental challenge in semantic parsing. To combat this challenge we apply several techniques. First, we use beam search at decoding time and when training from weak supervision (see Sec. 4), similar to prior work , where 1(z t | z 1:t-1 ) indicates whether a certain program token is valid given the program prefix. Discriminative re-ranking: The above model is a locally-normalized model that provides a distribution for every decoded token, and thus might suffer from the label bias problem and is normalized over all programs in the beam. The scoring function s ψ (x, z) is a neural network with identical architecture to the locallynormalized model, except that (a) it feeds the decoder with the candidate program z and does not generate it. (b) the last hidden state is inserted to a feed-forward network whose output is s ψ (x, z). Our final ranking score is p θ (z|x)p g ψ (z | x). We now describe our basic method for training from weak supervision, which we extend upon in Sec. 5 using abstract examples. To use weak supervision, we treat the program z as a latent variable that is approximately marginalized. To describe the objective, define R(z, k, y) ∈ {0, 1} to be one if executing program z on KB k results in denotation y, and zero otherwise. 
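The syntactic-validity masking mentioned above (normalizing p(z_t | x, z_{1:t-1}) only over program tokens that type-check given the prefix) can be sketched independently of the exact encoder-decoder architecture. The snippet below shows only the masked re-normalization and the resulting program log-probability; the per-step logits are assumed to come from the BiLSTM encoder-decoder described above.

```python
import torch
import torch.nn.functional as F

def masked_token_distribution(logits, valid_mask):
    """Renormalise the decoder's token distribution over syntactically valid
    program tokens only, mirroring the indicator 1(z_t | z_{1:t-1}).

    logits:     (vocab,) unnormalised scores from the decoder at step t
    valid_mask: (vocab,) bool tensor, True where the token type-checks
                given the program prefix z_{1:t-1}"""
    masked = logits.masked_fill(~valid_mask, float("-inf"))
    return F.softmax(masked, dim=-1)

def program_log_prob(step_logits, valid_masks, token_ids):
    """log p(z | x) = sum_t log p(z_t | x, z_{1:t-1}) under the masked model."""
    logp = torch.tensor(0.0)
    for logits, mask, tok in zip(step_logits, valid_masks, token_ids):
        logp = logp + torch.log(masked_token_distribution(logits, mask)[tok])
    return logp
```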
The objective is then to maximize p(y | x) given by: where Z is the space of all programs and B ⊂ Z are the programs found by beam search. In most semantic parsers there will be relatively few z that generate the correct denotation y. However, in CNLVR, y is binary, and so spuriousness is a central problem. To alleviate it, we utilize a property of CNLVR: the same utterance appears 4 times with 4 different images. Thus, we can re-define each training example to be (x, {(k j , y j )} 4 j=1 ), where each utterance x is paired with 4 different KBs and the denotations of the utterance with respect to these KBs. Then, we maximize p({y j } 4 j=1 | x, ) by maximizing the objective above, except that R(z, {k j , y j } 4 j=1 ) = 1 iff the denotation of z is correct for all four KBs. This dramatically reduces the problem of spuriousness, as the chance of randomly obtaining a correct denotation goes down from 1 2 to 1 16 . This is reminiscent of We train the discriminative ranker analogously by maximizing the probability of programs with correct denotation z∈B p g ψ (z | x)R(z, k, y). This basic training method fails for CNLVR (see Sec. 6), due to the difficulties of search and spuriousness. Thus, we turn to learning from abstract examples, which substantially reduce these problems. The main premise of this work is that in closed, well-typed domains such as visual reasoning, the main challenge is handling language compositionality, since questions may have a complex and nested structure. Conversely, the problem of mapping lexical items to functions and constants in the programming language can be substantially alleviated by taking advantage of the compact KB schema and typing system, and utilizing a 1. "There are exactly 3 yellow squares touching the wall." 2. "There are at least 2 blue circles touching the wall." While the surface forms of these utterances are different, at an abstract level they are similar and it would be useful to leverage this similarity. We therefore define an abstract representation for utterances and logical forms that is suitable for spatial reasoning. We define seven abstract clusters (see Table As we show next, abstract examples can be used to improve the process of training semantic parsers. Specifically, in sections 5.1-5.3, we use abstract examples in several ways, from generating new training data to improving search accuracy. The combined effect of these approaches is quite dramatic, as our evaluation demonstrates. We begin by demonstrating that abstraction leads to rather effective coverage of the types of questions asked in a dataset. Namely, that many ques-tions in the data correspond to a small set of abstract examples. We created abstract representations for all 3,163 utterances in the training examples by mapping utterance tokens to their cluster label, and then counted how many distinct abstract utterances exist. We found that as few as 200 abstract utterances cover roughly half of the training examples in the original training set. The above suggests that knowing how to answer a small set of abstract questions may already yield a reasonable baseline. To test this baseline, we constructured a "rule-based" parser as follows. We manually annotated 106 abstract utterances with their corresponding abstract program (including alignment between abstract tokens in the utterance and program). For example, Table Given this set of manual annotations, our rulebased semantic parser operates as follows: Given an utterance x, create its abstract representation x. 
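The beam-approximated weak-supervision objective with the reward tied across the four KBs paired with each utterance can be sketched as below; `execute` stands for the program executor and is a placeholder. In practice, examples whose beam contains no rewarded program yield an infinite loss here and would simply contribute no useful gradient.

```python
import torch

def tied_reward(program, kbs_and_labels, execute):
    """R(z, {(k_j, y_j)}) = 1 iff executing z gives the gold denotation on all
    four KBs paired with the utterance, else 0."""
    return float(all(execute(program, k) == y for k, y in kbs_and_labels))

def weak_supervision_loss(beam_programs, beam_log_probs, kbs_and_labels, execute):
    """-log sum_{z in beam} p(z|x) R(z, ...): an approximate marginal likelihood
    restricted to beam programs with a correct denotation on all four KBs."""
    rewards = torch.tensor([tied_reward(z, kbs_and_labels, execute)
                            for z in beam_programs])
    log_probs = torch.stack(beam_log_probs)        # list of 0-dim log p(z|x) tensors
    rewarded = log_probs + torch.log(rewards)      # -inf removes unrewarded programs
    return -torch.logsumexp(rewarded, dim=0)
```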
If it exactly matches one of the manually annotated utterances, map it to its corresponding abstract program z. Replace the abstract program tokens with real program tokens based on the alignment with the utterance tokens, and obtain a final program z. If x does not match return TRUE, the majority label. The rule-based parser will fail for examples not covered by the manual annotation. However, it already provides a reasonable baseline (see Table While the rule-based semantic parser has high precision and gauges the amount of structural variance in the data, it cannot generalize beyond observed examples. However, we can automatically generate non-abstract utterance-program pairs from the manually annotated abstract pairs and train a semantic parser with strong supervision that can potentially generalize better. E.g., consider the utterance "There are exactly 3 yellow squares touching the wall", whose abstract representation is given in Table // C is a map where the key is an abstract utterance and the value is a pair (Z, R) of a list of abstract programs Z and their average rewards R. D is an integer. 3: x ← Abstract utterance of x 4: A ← D programs in C[x] with top reward values 5: B1 ← compute beam of programs of length 1 6: for t = 2 . . . T do // Decode with cache 7: Bt ← construct beam from Bt-1 8: At = truncate(A, t) 9: Bt.add(de-abstract(At)) 10: for z ∈ BT do //Update cache 11: Update rewards in C[x] using (z, R(z, y)) 12: return BT ∪ de-abstract(A). to the program of the first utterance, with IsBlue replacing IsYellow. More generally, we can sample any abstract example and instantiate the abstract clusters that appear in it by sampling pairs of utterance-program tokens for each abstract cluster. Formally, this is equivalent to a synchronous context-free grammar We now describe a caching mechanism that uses abstract examples to combat search and spuriousness when training from weak supervision. As shown in Sec. 5.1, many utterances are identical at the abstract level. Thus, a natural idea is to keep track at training time of abstract utteranceprogram pairs that resulted in a correct denotation, and use this information to direct the search procedure. Concretely, we construct a cache C that maps abstract utterances to all abstract programs that were decoded by the model, and tracks the average reward obtained for those programs. For every utterance x, after obtaining the final beam of programs, we add to the cache all abstract utteranceprogram pairs (x, z), and update their average reward (Alg. 1, line 10). To construct an abstract example (x, z) from an utterance-program pair (x, z) in the beam, we perform the following procedure. First, we create x by replacing utterance tokens with their cluster label, as in the rule-based semantic parser. Then, we go over every program token in z, and replace it with an abstract cluster if the utterance contains a token that is mapped to this program token according to the mappings from Table We propose two variants for taking advantage of the cache C. Both are shown in Algorithm 1. 1. Full program retrieval (Alg. 1, line 12): Given utterance x, construct an abstract utterance x, retrieve the top D abstract programs A from the cache, compute the de-abstracted programs Z using alignments from program tokens to utterance tokens, and add the D programs to the final beam. 2. Program prefix retrieval (Alg. 1, line 9): Here, we additionally consider prefixes of abstract programs to the beam, to further guide the search process. 
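The rule-based parser can be summarized as: abstract the utterance with the lexicon, look it up among the 106 manually annotated abstract utterance-program pairs, de-abstract the matched program through the stored alignment, and otherwise fall back to the majority label TRUE. A minimal sketch, with data-structure choices (two lexicons and an alignment list) that are ours:

```python
def abstract_utterance(tokens, word_to_cluster):
    """Map each token to its abstract cluster label (e.g. 'yellow' -> 'C-Color',
    '3' -> 'C-Num'); tokens outside the lexicon stay as-is."""
    return tuple(word_to_cluster.get(tok, tok) for tok in tokens)

def rule_based_parse(tokens, word_to_cluster, word_to_constant, annotations):
    """Abstract the utterance, look it up among the manually annotated abstract
    utterance-program pairs, de-abstract the program via the stored alignment,
    and fall back to the majority label TRUE when nothing matches.

    annotations: {abstract_utterance: (abstract_program, alignment)} where
    alignment[j] is the utterance position aligned to abstract program token j
    (None for program tokens that are already concrete)."""
    abstract = abstract_utterance(tokens, word_to_cluster)
    if abstract not in annotations:
        return True                               # majority label

    abstract_program, alignment = annotations[abstract]
    program = []
    for prog_tok, src in zip(abstract_program, alignment):
        if src is None:
            program.append(prog_tok)              # structural token, kept as-is
        else:                                     # de-abstract via aligned word,
            program.append(word_to_constant[tokens[src]])  # e.g. 'yellow' -> IsYellow
    return program
```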
At each step t, let B t be the beam of decoded programs at step t. For every abstract program z ∈ A add the de-abstracted prefix z 1:t to B t and expand B t+1 accordingly. This allows the parser to potentially construct new programs that are not in the cache already. This approach combats both spuriousness and the search challenge, because we add promising program prefixes to the beam that might have fallen off of it earlier. Fig. A high-level overview of our entire approach for utilizing abstract examples at training time for both data augmentation and model training is given in Fig. Model and Training Parameters The Bi-LSTM state dimension is 30. The decoder has one hidden layer of dimension 50, that takes the last 4 decoded tokens as input as well as encoder states. Token embeddings are of dimension 12, beam size is 40 and D = 10 programs are used in Algorithm 1. Word embeddings are initialized from CBOW Pre-processing Because the number of utterances is relatively small for training a neural model, we take the following steps to reduce sparsity. We lowercase all utterance tokens, and also use their lemmatized form. We also use spelling correction to replace words that contain typos. After pre-processing we replace every word that occurs less than 5 times with an UNK symbol. Evaluation We evaluate on the public development and test sets of CNLVR as well as on the hidden test set. The standard evaluation metric is accuracy, i.e., how many examples are correctly classified. In addition, we report consistency, which is the proportion of utterances for which the decoded program has the correct denotation for all 4 images/KBs. It captures whether a model consistently produces a correct answer. Baselines We compare our models to the MA-JORITY baseline that picks the majority class (TRUE in our case). We also compare to the stateof-the-art model reported by Test when taking the KB as input, which is a maximum entropy classifier (MAXENT). For our models, we evaluate the following variants of our approach: • RULE: The rule-based parser from Sec. 5.1. • SUP.: The supervised semantic parser trained on augmented data as in Sec. 5.2 (5, 598 examples for training and 560 for validation). • WEAKSUP.: Our full weakly-supervised semantic parser that uses abstract examples. • +DISC: We add a discriminative re-ranker (Sec. 3) for both SUP. and WEAKSUP. Main results Table We analyze our results by running multiple ablations of our best model W.+DISC on the development set. To examine the overall impact of our procedure, we trained a weakly-supervised parser from scratch without pre-training a supervised parser nor using a cache, which amounts to a re-implementation of the RANDOMER algorithm To further examine the importance of abstraction, we decoupled the two contributions, training once with a cache but without data augmentation for pre-training (-DATAAUGMENTATION), and again with pre-training over the augmented data, but without the cache (-BEAMCACHE). We found that the former improves by a few points over the MAXENT baseline, and the latter performs comparably to the supervised parser, that is, we are still unable to improve learning by training from denotations. Lastly, we use a beam cache without line 9 in Alg. 1 (-EVERYSTEPBEAMCACHE). This already results in good performance, substantially higher than SUP. but is still 3.4 points worse than our best performing model on the development set. 
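A simplified sketch of the caching mechanism of Algorithm 1, covering the reward-averaging cache and the full-program-retrieval variant; the prefix-retrieval variant additionally injects de-abstracted prefixes z_{1:t} into each intermediate beam B_t and is omitted here. The abstraction and de-abstraction helpers are assumed to exist.

```python
from collections import defaultdict

class AbstractProgramCache:
    """Cache C from Algorithm 1: abstract utterance -> abstract programs with a
    running average of the reward they obtained during training."""

    def __init__(self):
        self._stats = defaultdict(lambda: defaultdict(lambda: [0.0, 0]))  # sum, count

    def update(self, abs_utt, abs_prog, reward):
        stats = self._stats[abs_utt][abs_prog]
        stats[0] += reward
        stats[1] += 1

    def top_programs(self, abs_utt, d=10):
        """Top-D abstract programs by average reward (line 4 of Algorithm 1)."""
        scored = [(s[0] / s[1], prog) for prog, s in self._stats[abs_utt].items()]
        return [prog for _, prog in sorted(scored, reverse=True)[:d]]

def decode_with_cache(x, beam_decoder, cache,
                      abstract_utt_fn, abstract_prog_fn, deabstract_fn, d=10):
    """Full-program-retrieval variant: decode a beam as usual, add the
    de-abstracted top-D cached programs to the final beam, and update the cache
    with the abstract form and reward of every decoded program."""
    abs_utt = abstract_utt_fn(x)
    cached = [deabstract_fn(p, x) for p in cache.top_programs(abs_utt, d)]
    beam = beam_decoder(x)                        # list of (program, reward) pairs
    for z, reward in beam:                        # lines 10-11: refresh the cache
        cache.update(abs_utt, abstract_prog_fn(z, x), reward)
    return [z for z, _ in beam] + cached
```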
Orthogonally, to analyze the importance of tying the reward of all four examples that share an utterance, we trained a model without this tying, where the reward is 1 iff the denotation is correct (ONEEXAMPLEREWARD). We find that spuriousness becomes a major issue and weaklysupervised learning fails. Error Analysis We sampled 50 consistent and 50 inconsistent programs from the development set to analyze the weaknesses of our model. By and large, errors correspond to utterances that are more complex syntactically and semantically. In about half of the errors an object was described by two or more modifying clauses: "there is a box with a yellow circle and three blue items"; or nesting occurred: "one of the gray boxes has exactly three objects one of which is a circle". In these cases the model either ignored one of the conditions, resulting in a program equivalent to "there is a box with three blue items" for the first case, or applied composition operators wrongly, outputting an equivalent to "one of the gray boxes has exactly three circles" for the second case. However, in some cases the parser succeeds on such examples and we found that 12% of the sampled utterances that were parsed correctly had a similar complex structure. Other, less frequent reasons for failure were problems with cardinality interpretation, i.e. ,"there are 2" parsed as "exactly 2" instead of "at least 2"; applying conditions to items rather than sets, e.g., "there are 2 boxes with a triangle closely touching a corner" parsed as "there are 2 triangles closely touching a corner"; and utterances with questionable phrasing, e.g., "there is a tower that has three the same blocks color". Other insights are that the algorithm tended to give higher probability to the top ranked program when it is correct (average probability 0.18), compared to cases when it is incorrect (average probability 0.08), indicating that probabilities are correlated with confidence. In addition, sentence length is not predictive for whether the model will succeed: average sentence length of an utterance is when the model is correct, and 11.1 when it errs. We also note that the model was successful with sentences that deal with spatial relations, but struggled with sentences that refer to the size of shapes. This is due to the data distribution, which includes many examples of the former case and fewer examples of the latter. Training semantic parsers from denotations has been one of the most popular training schemes for scaling semantic parsers since the beginning of the decade. Early work focused on traditional log-linear models Visual reasoning has attracted considerable attention, with datasets such as VQA Our method for generating training data resembles data re-combination ideas in While spuriousness is central to semantic parsing when denotations are not very informative, there has been relatively little work on explicitly tackling it. In this work we presented the first semantic parser for the CNLVR dataset, taking structured representations as input. Our main insight is that in closed, well-typed domains we can generate abstract examples that can help combat the difficulties of training a parser from delayed supervision. First, we use abstract examples to semiautomatically generate utterance-program pairs that help warm-start our parameters, thereby reducing the difficult search challenge of finding correct programs with random parameters. 
Second, we focus on an abstract representation of examples, which allows us to tackle spuriousness and alleviate search, by sharing information about promising programs between different examples. Our approach dramatically improves performance on CNLVR, establishing a new state-of-the-art. In this paper, we used a manually-built highprecision lexicon to construct abstract examples. This is suitable for well-typed domains, which are ubiquitous in the virtual assistant use case. In future work we plan to extend this work and automatically learn such a lexicon. This can reduce manual effort and scale to larger domains where there is substantial variability on the language side. | 1,239 | 190 | 1,239 |
Arabic Dialect Identification with a Few Labeled Examples Using Generative Adversarial Networks | Given the challenges and complexities introduced while dealing with Dialect Arabic (DA) variations, Transformer based models, e.g., BERT, outperformed other models in dealing with the DA identification task. However, to fine-tune these models, a large corpus is required. Getting a large number high quality labeled examples for some Dialect Arabic classes is challenging and time-consuming. In this paper, we address the Dialect Arabic Identification task. We extend the transformer-based models, ARBERT and MARBERT, with unlabeled data in a generative adversarial setting using Semi-Supervised Generative Adversarial Networks (SS-GAN). Our model enabled producing high-quality embeddings for the Dialect Arabic examples and aided the model to better generalize for the downstream classification task given few labeled examples. Experimental results showed that our model reached better performance and faster convergence when only a few labeled examples are available. | While Arabic is the first language of most of the Middle East and North Africa (MENA) region, different countries have different dialects of Arabic. These Dialect Arabic (DA) forms are all different from the Modern Standard Arabic (MSA). MSA is used in formal writing and speaking situations, like academia and media. In contrast, DA is the language of the street. DA is spoken by people informally in their daily conversations and on social media platforms. The task of automatically identifying the dialect of Arabic is beneficial since it contributes to many downstream tasks and applications, such as Speech Recognition and Machine translation. Some Arabic Dialects are very close to each other (e.g. Some datasets are imbalanced with few classes dominating the whole dataset. Figure Given these challenges, getting a large corpus of labeled DA examples for all Arab countries is challenging and time-consuming. These complexities represent a major challenge in the Arabic Dialect Identification task. We aim to improve the transformer-based models, i.e., BERT In this paper, we extend BERT-based models, ARBERT and MARBERT The contributions of this work are: • Adopting the semi-supervised setting using GAN • We study the classification of Dialect Arabic against very small training datasets using our extended GAN models. The training sets were sampled from 4 different Arabic datasets: QADI • We applied a 2-stage setup, training the GAN extended model for some epochs and then, having a second stage of BERT-based model training. These early GAN epochs boosted BERT-based model convergence speed and performance results. The 2-stages experiment outperformed the BERT-based models for the same number of epochs. The rest of the paper is organized as follows: in section 2, we discuss the related work in the Dialect Arabic Identification task and variations of BERTbased models. In section 3, we illustrate the system components and model architectures. We show the conducted experiments and their results, in section 4. Finally, we give a brief conclusion based on our work and the obtained results. 2 Related Work | The main challenge in Arabic Dialect Identification is the rarity of high-quality labeled datasets that represent all Arabic dialects. Recently, some datasets were introduced. However, most of them have limitations as will be shown in the next paragraphs. The Arabic Online Commentary AOC Dialect Identification shared tasks impassioned the Arabic DA work. 
The Multi Arabic Dialects Application and Resources (MADAR) ArSarcasm The First Nuanced Arabic Dialect Identification Shared Task (NADI 2020) QADI In Adversarial settings were also introduced on top of BERT-based models to generate different examples, which help in various text classification tasks. BAE 3 Adopted Model One of the key challenges in Arabic Dialect Identification research is insufficient labeled datasets. Many datasets don't fairly represent all classes, i.e., imbalanced datasets. Other datasets suffer from labeling noise. Although having a sufficient amount of unlabeled data is extremely easy, e.g. crawling tweets, the process of labeling these examples with correct labels is expensive, impractical, and timeconsuming. Some easier methods are adopted while labeling such data, e.g., depending on Twitter users' geographic location or account metadata. Unfortunately, these methods are not accurate to representing correct classes and lead to many misslabeled examples. Arabic is a highly inflected and derivational language. The inflection and derivation rules may change from one Arabic Dialect to another. Moreover, the same word might have totally different meaning in different Arabic Dialects. For instance, the word (Mahdoum) meaning in MSA and Egyptian dialect is digested, which is used to describe food. While in Levantine Arabic (dialects spoken in Syria, Lebanon, Jordan and Palestine), its meaning is joyful or delightful, and used to describe persons. These specific characteristics of Our work is mainly based on GAN-BERT model The Discriminator has 3 inputs: fake examples generated by the Generator (x*), real unlabeled examples (x), and real labeled training examples (x, y), with y denoting the label for the given example x. In this work, we extend BERT-based models using SS-GAN. We use BERT-based models pretrained on Arabic datasets, namely ARBERT and MARBERT Given an input example, e = (t 1 , t 2 , , .., t n ), BERT model's output is an n + 2 vector representation in R d , i.e., (h CLS , h 1 , h 2 , .., h SEP ). As advised in The generator G is a Multi-Layer Perceptron (MLP) that takes an input of a 100-dimensional random noise vector drawn from Normal Distribution N (µ, σ 2 ) and outputs a vector h f ake ∈ R d . As shown in Figure As discussed in section 3, we extend BERTbased models with a generative adversarial setting. The generator G is an MLP with a single hidden layer activated by a leaky relu function. The generator G input is a random noise vector drawn from the Normal distribution N (0, 1). The generator G output is a 768-dimensional vector that represents the fake generated examples. The discriminator D is another similar MLP with a final softmax layer for the final dialect classification. We use a dropout rate of 0.2 after the hidden layer in both G and D. We chose the best performing BERT-based pretrained model as the base model for each dataset, as reported in The experiment showed that adding the first stage with the semi-supervised setting helped the base model to better generalize for a few labeled examples and to converge faster.. Overall, the 2stages setup outperformed the base model. For ArSarcasm One of the main challenges of the Arabic Dialect Identification task is the rarity of high-quality labeled examples. This paper addresses this problem by adopting adversarial training to allow semisupervised learning. it applies this approach to two BERT-based models, namely, MARBERT and AR-BERT. 
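A minimal PyTorch sketch of the generator and discriminator described above: the generator maps 100-dimensional Gaussian noise through one leaky-ReLU hidden layer with dropout 0.2 to a 768-dimensional fake representation, and the discriminator is a similar MLP whose softmax, following the GAN-BERT recipe, covers the k dialect classes plus one extra "fake" class. The leaky-ReLU slope and hidden sizes are illustrative choices, not values taken from the paper.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """MLP generator: 100-d Gaussian noise -> 768-d fake 'BERT-like' vector."""
    def __init__(self, noise_dim=100, hidden_dim=768, out_dim=768, dropout=0.2):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(
            nn.Linear(noise_dim, hidden_dim),
            nn.LeakyReLU(0.2),
            nn.Dropout(dropout),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, batch_size):
        noise = torch.randn(batch_size, self.noise_dim)   # z ~ N(0, 1)
        return self.net(noise)

class Discriminator(nn.Module):
    """MLP discriminator over real (labeled/unlabeled) BERT representations and
    fake generated vectors; softmax over k dialects + 1 'fake' class."""
    def __init__(self, num_dialects, in_dim=768, hidden_dim=768, dropout=0.2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.LeakyReLU(0.2),
            nn.Dropout(dropout),
        )
        self.head = nn.Linear(hidden_dim, num_dialects + 1)  # +1 for "fake"

    def forward(self, reps):
        features = self.body(reps)
        return self.head(features), features   # logits + features for G's feature loss
```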
Experimental results show that the GAN extension improves the performance of the BERT-based models, given a few labeled examples. The paper also introduces a 2-stages setup, where it trains the base model extended with GAN component for 5 epochs, then eliminate the GAN component and train the base model alone for another 5 epochs. Using very small training sets, the adopted approach helps the base model for better generalization and faster convergence, with no additional cost at inference time. Adding SS-GAN module on top of BERT-based models, empirically showed enhancements in performance and faster convergence given a few labeled examples of the datasets, which validates our hypothesis. | 970 | 2,123 | 970 |
KEBAP: Korean Error Explainable Benchmark Dataset for ASR and Post-processing | Automatic Speech Recognition (ASR) systems are instrumental across various applications, with their performance being critically tied to user satisfaction. Conventional evaluation metrics for ASR systems produce a singular aggregate score, which is insufficient for understanding specific system vulnerabilities. Therefore, we aim to address the limitations of the previous ASR evaluation methods by introducing the Korean Error Explainable Benchmark Dataset for ASR and Post-processing (KEBAP). KE-BAP enables comprehensive analysis of ASR systems at both speech-and text levels, thereby facilitating a more balanced assessment encompassing speech recognition accuracy and user readability. KEBAP provides 37 newly defined speech-level resources incorporating diverse noise environments and speaker characteristics categories, also presenting 13 distinct textlevel error types. This paper demonstrates detailed statistical analyses of colloquial noise categories and textual error types. Furthermore, we conduct extensive validation and analysis on commercially deployed ASR systems, providing valuable insights into their performance. As a more fine-grained and real-world-centric evaluation method, KEBAP contributes to identifying and mitigating potential weaknesses in ASR systems. * Equally contributed, ‡ Corresponding author 1 Recognition accuracy is the measure of accurately perceiving phonemes as they are externally expressed, regardless of user input quality | Automatic speech recognition (ASR) is a task that recognizes speech and converts it into text, and it is getting more and more attention with the development of voice interface applications and devices such as Alexa, Siri, and Cortana if the ASR model accurately recognizes the input voice, the user's readability may decrease. This is because humans do not always utter perfect sentences in the real world (e.g., incomplete utterances, sighs, etc.). To achieve balanced ASR results in this trade-off situation, it is required to consider both recognition accuracy and user readability. In terms of recognition accuracy, various ASR evaluation metrics such as word error rate (WER) (Woodard and Nelson) and character error rate (CER) However, it is crucial to recognize that even if quantitative evaluation scores are similar, the qualitative aspects of the ASR results may not necessarily align. Conventional research methods, which focus on accuracy or user readability, compute quantitative scores based on the degree of alignment between inputs and outputs. This approach falls short in classifying potential error types or pinpointing the model's specific weaknesses, thus lacking explanatory power for real-world ASR model outputs. This deficiency hinders the establishment of clear directions for model improvement. To this end, datasets that aim to enhance the explanatory power of ASR evaluations by considering noisy environments or speaker characteristics have been published recently We employ KEBAP to conduct an empirical analysis of the correlations between speech-level noise types and textual error types. Moreover, leveraging ChatGPT (OpenAI-Blog, 2022), we explore the potential of language models in discovering the vulnerabilities of ASR models. Our observations highlight KEBAP's significant interpretability of ASR model diagnostics and shed light on the pressing need for research on diagnostic tasks for ASR systems. 
Our work sets the stage for more real-world-oriented evaluations of ASR systems and can contribute to the advancements in this domain. | In the real-world scenario, mitigating the tradeoff between recognition accuracy and user readability is crucial. To address this, we propose KE-BAP, emphasizing the importance of considering both aspects. A detailed explanation is as follows. Firstly, in real-world speech recognition, it is essential to consider the accuracy of model and end-user satisfaction simultaneously. To facilitate this, we propose to map the accuracy of the ASR model to 'speech-level noises' and user readability to 'text-level errors' to mitigate this inherent tradeoff. From the perspective of the accuracy of the ASR model, it should output the recognition results 'as heard,' regardless of the quality of the user-provided input. Conversely, from the standpoint of the end-user receiving the result, satisfaction increases when the output is presented in a refined state, despite any errors in the initial input. For instance, if a speaker stammers during their speech, the ASR model would likely deem its output more accurate if it recognizes and outputs all the words uttered. However, this would likely result in lower readability from the user's perspective. In addition, previous research lacks an adequate number of error types for a detailed diagnosis. Since benchmarks measure performance with quantitative metrics, it is crucial to subdivide characteristics for a more detailed diagnosis. In industry contexts, communication between model and service teams is critical. When there's an issue with the model, clear criteria for the data flywheel significantly facilitate communication. That is, distinguishing the error type criteria for speech-and textlevel aids in detailed diagnosis for model improvement. However, conventional benchmark datasets lack sufficient error types for detailed model analysis, leading to extensive usage of human evaluation in real-world settings. Humans can cope using commonsense, even if the criteria are unclear, but existing benchmarks with limited error types fall short. Hence, to solve the explainability issue, we must define error type criteria that consider both the speech-and text-level and create benchmarks to achieve human-level explainability. To enhance the explanatory power of the validation process for ASR models, we define errors Error types at the speech-level refer to factors that trigger inaccuracies in speech recognition situations. For example, identical utterances may be challenging to recognize due to background noise Considering environments inundated with noise, it does not represent a quiet recording situation but rather a condition intertwined with noise. Realworld scenarios frequently involve inputs replete with ambient noise Text-level error types refer to issues that emerge in speech recognition results and must be addressed by post-processing. Since the output of the speech recognizer serves as the input for downstream tasks, it is one of the most significant factors influencing end-user satisfaction. By improving the performance of downstream tasks through quality input and diagnosing the performance of post-processing models through detailed error types, it is possible to enhance end-user satisfaction. 
Existing datasets that detail error types, such as grammatical error correction (GEC) datasets, do not consider speech recognition situations Table In this work, we propose a comprehensive data construction guideline for the ASR and ASRP dataset, grounded in the application of a GEC dataset. Our methodology encompasses build text-level error corpus, speech recording, noise synthesis, and difficulty annotation. For the efficiency of the task, we choose the 'consensus labeling' method Step 1: Build Text-Level Error Corpus In this study, we employ a human-curated GEC dataset, which encompasses various text-level error types We assess whether the given error types are valid or invalid in the context of speech recognition situations by human evaluators. Invalid types are filtered out, and the type structure is reconfigured In particular, we extract 13 categories that resonate with speech recognition scenarios (e.g., honorific colloquial expression) and reorganize their hierarchy for ease of labeling. Consequently, our refined dataset includes data reflecting 13 error types relevant to speech recognition contexts. Subsequently, we authenticate the quality of the filtered dataset focusing on the alignment between labels and text, and the inclusion of text-level errors with a specific consideration of the speech recognition context. Validation processes proceed with a human supervisor, priorily trained with each error type. Evaluators are presented with an erroneous sentence, its correct counterpart, and a specified error type with the corresponding error span indicated. They are then tasked with assessing whether the sentence contains the presented error types. Sentences deemed to be incorrect are appropriately amended. This procedural framework ensures the generation of a high-quality dataset. Step 2: Speech Recording In the second phase, we request that recording participants incorporate characteristics of interlocutor errors into their recordings by presenting them with speech-level errors and transcription relevant to the respective error types. At most 3 error types are presented, which could include an instance of 'no error type', indicating clean data. The placement of the error within the sentence is non-specific, with the ensurance that it includes only the errors specified. The recording environment should be ensured to be quiet without background noise. Each recorder is instructed to speak as naturally as possible, emulating their speech patterns when interacting with a voice interface application in real-world scenarios. After completing the recording, participants have the opportunity to listen to their own voice, and if they determine that the speech does not meet the criteria, they can re-record it. Participants are required to go through the process of listening to their recorded speech in order to complete the recording task. The detailed information about the workers can be found in Appendix D. Step 3: Synthesis of Background Noise In the next stage, we incorporate background noise into the recording to reflect the noise environment er-ror in the proposed speech-level. The background noise used for this integration is derived directly from recordings of the identified environments. We ensure that the collected noise spans a duration longer than that of the recording file, fostering noise diversity. To mimic real-world situations, we conduct both single and multiple noise syntheses while filtering out instances that are unlikely to co-occur. 
During noise synthesis, the noise is integrated as though it is ambient background noise, designed to be audible at the onset of the voice file. Noise is composited into the recording by randomly excising sections, thus ensuring variation within sounds, even when they are categorized under the same noise type. Step 4: Difficulty Annotation Difficult data for ASR models refers to data that is not frequently encountered in the training data and is imbalanced, varying depending on the user We filter out cases that cannot occur in speech recognition situations, such as typing language errors caused by keyboard language switching, in the GEC dataset. After the filtering process, the textlevel distribution is shown in Overall, the average Krippendorff's α for interannotator agreement of each annotation level is 0.476. The label distribution of the collected data is shown in Figure Each data includes single or multiple speech-level characteristics. Figure In this section, we assess the specific capabilities of commercialized ASR models using KEBAP. To achieve this, we conduct a detailed correlation analysis of commercialized systems such as Google Cloud Speech-to-Text Table Both the Google and Naver ASR systems exhibit significant error propagation in the domain of public transportation. Specifically, for Google, there is a high correlation between speech-level errors and text-level errors in punctuation, spacing, and replace. On the other hand, Clova shows a strong correlation between speech-level errors and textlevel errors in punctuation, spacing, and addition. Furthermore, google showed robustness in the nature ambient, but Clova showed relatively more text errors. Figure This analysis provides valuable insights into the correlation between speech-level noise, particularly noisy environments, and text-level errors in the Google and Clova ASR systems. The varying impact of different types of speech-level characteristics on text-level errors highlights the need for further granularity in categorizing these types. Even if models demonstrate similar performance, the individual capabilities of each model can differ. This demonstrates that KEBAP helps enhance the interpretability of ASR model verification Table With recent advancements in Large Language Model (LLM) development, most tasks are converging towards LLM-based approaches. In this study, we explore the potential of using Chat-GPT (OpenAI-Blog, 2022) as a diagnostic tool for ASR results. Understanding error types is essential for verifying the models, and to measure this understanding, we perform an error type classification task. ChatGPT is utilized to classify text-level error types based on provided sentences in a few-shot setup. The specific prompt used for this experiment is listed in Appendix F. We task ChatGPT with classifying all text-level errors occurring in the ASR results. However, as seen in the examples (please refer to Appendix F.2), it is evident that ChatGPT not only misclassifies text-level errors but also struggles more when multiple errors are present within a sentence. Although LLMs are converging towards covering various tasks, they exhibit limitations in performing diagnostic tasks for commercial systems. This indicates that while various tasks may converge with LLM, the diagnostic domain for the proposed model is far from convergence with LLMs, highlighting the need for further research. 
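The background-noise synthesis step can be sketched as follows: a random segment is excised from a (longer) noise recording and mixed under the clean speech starting at its onset, and multiple noise types can be layered for the multi-noise condition. The mixing gain, the shared-sample-rate assumption, and the library choice (soundfile) are ours, not specified by the paper.

```python
import numpy as np
import soundfile as sf

def random_noise_segment(noise, length, rng):
    """Randomly excise a segment of the (longer) noise recording so that the
    same noise type still yields varied background sound."""
    start = rng.integers(0, len(noise) - length + 1)
    return noise[start:start + length]

def synthesize(speech_path, noise_paths, out_path, noise_gain=0.3, seed=0):
    """Mix one or more background-noise recordings under a clean recording,
    starting at the onset of the voice file.  Assumes mono files with the same
    sample rate; noise_gain is an illustrative mixing level."""
    rng = np.random.default_rng(seed)
    speech, sr = sf.read(speech_path)
    mix = speech.copy()
    for path in noise_paths:                      # single or multiple noise types
        noise, noise_sr = sf.read(path)
        assert noise_sr == sr and len(noise) >= len(speech)
        mix += noise_gain * random_noise_segment(noise, len(speech), rng)
    mix = np.clip(mix, -1.0, 1.0)                 # avoid clipping artefacts
    sf.write(out_path, mix, sr)
```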
In the real-world, ASR results involve a trade-off between recognition accuracy and user readability, thus requiring a balanced consideration of these factors. To provide guidance for improving model performance, it is necessary to enhance interpretability, which entails considering both speech-level accuracy and text-level user readability. To this end, we propose Korean Error Explainable Benchmark Dataset for ASR and Post-processing (KEBAP) for diagnosing and validating models by segmenting error types while considering both speech-and text-level. To facilitate the construction process, we utilize a GEC dataset that includes text-level errors and structure the process into validation, recording, synthesis of background noise, and difficulty tagging stages, employing consensus labeling within each stage to enhance the efficiency and quality of the task. We performed a detailed diagnostic analysis of the commercialization systems using KEBAP. Furthermore, the proposed task falls into a domain that is challenging for ChatGPT to cover, and it indicates the need for further research to achieve a closer approximation to real-world diagnostics. We demonstrated that KEBAP contributes to enhancing the interpretability of the model's weaknesses. This study has the limitation of only building data for the Korean language. Additionally, as this paper proposes a new task, it was not able to conduct extensive quantitative analyses by comparing it with existing models, which remains a limitation. However, this paper made a contribution by proposing new data and tasks and making them publicly available. We discuss the main ethical considerations of KE-BAP benchmark we presented: (1) Privacy. KE-BAP benchmark is constructed to acquire factual dataset, and does not contain privacy issues. (2) Human evaluation. During data evaluation process, we paid human workers the legal wage determined by the average time of evaluation and local labor compensation standards. We also guided them to take a rest when they are in a state of fatigue during work. (3) Potential problems. While principled measures are taken to ensure the quality of the dataset, there might still be potential problems with the dataset quality. Post-processing Model Post-processing serves an important role in quality enhancement across various fields by modifying the distorted output into appropriate statements. For instance, in the field of optical character recognition (OCR), conventional approaches such as manual, lexical, and statistical methods have been used As another field, machine translation (MT) often utilizes the following methods. Post-processing research is being carried out in automatic post-editing (APE) to improve translation quality by adopting transfer learning (Correia and Martins, 2019). Concurrently, in the grammatical error correction (GEC) field, transformers and the copy mechanism are used to correct spelling and grammatical errors in MT results ASR Post-Processing Model ASR postprocessing (ASRP) involves the detection and correction of errors in the output of an ASR, distinguishing it from simple error correction in that it considers user-friendliness as an additional aspect. This approach can improve the final quality of statements without modifying the ASR system structure. For instance, in specialized fields like the medical domain, attempts have been made to eliminate punctuation errors in ASR through post-processing Recent studies strive to improve ASRP performance by utilizing the results derived from ASR. 
The availability of suitable datasets is imperative for the active progression of ASRP. Previously, post-processing studies have been conducted with ASR datasets. Transcription hypotheses obtained by decoding audio data using an ASR model are used to align hypothesis words with the reference (correct) transcription. The process of labeling errors and nonerrors is facilitated by employing the minimum edit distance To mitigate the problem of insufficient training data, methodologies that synthesize data via data augmentation methods have been proposed Spacing encapsulates instances contravening standard spacing conventions. Punctuation entails cases where punctuation is omitted or misapplied in Korean sentences-for instance, when 'Can I teach?' is interpreted as 'Can I teach.' Numerical encompasses cases where number conversion fails, such as when 'Ahead of the three-month schedule' is interpreted as 'Bill 2, 3-month schedule'. Spelling and Grammar consists of ten detailed subcategories. Remove designates cases where some word components are not recognized, or endings or particles are missing-for example, when 'The champion is in the final' is misinterpreted as 'Champion final'. Addition involves cases where the same word is repeated or unutilized particles or endings are appended. For instance, when 'World's fruits, fish, and meat' is interpreted as 'World's world's fruits, fish, and meat'. Replace refers to instances where one word is substituted with another-for example, when 'Apply the filter.' is interpreted as 'Wear the pizza'. Separation refers to instances where consonants and vowels in the target utterance are separated, exemplified when 'The discount applies as it is.' is interpreted as 'Discount app -lise as it is.'. Foreign word conversion refers to cases where words deviate from standard foreign word pronunciation or some syllables are incorrectly converted from English to Korean or vice versa. For example, when 'Brazil's Samba Festival' is interpreted as 'Brazil's SsamBap Festival,' or 'I prefer to use ATM.' is interpreted as 'I prefer to use hm.'. Spelling is bifurcated into two types: Graphemeto-Phoneme (G2P) and Consonant vowel conversion. G2P pertains to instances where a character is recognized per its pronunciation. Consonant vowel conversion refers to instances where phonemic units are incorrectly spelled. Post-position refers to cases where different particles are used or omitted-for example, when 'Ordinary high school students' is interpreted as 'Ordinary at high school students.' Syntax involves cases where the grammatical interpretation remains valid, but the semantic interpretation varies. Finally, neologism refers to cases where the target word and its meaning and pronunciation are dissimilar and are not included in Korean vocabulary. The detailed demographic information is provided in We believe that providing difficulty information facilitates the analysis of weaknesses in ASR models. We extracted an equal number of samples for each difficulty level and analyzed them. Analyzing the details based on different difficulty levels can be employed to enhance the interpretability of the ASR model. For example, in the case of Google, experimental results show that the correlation from 'Terminal' speech-level type to 'Punctuation' text-level type is strong for easy level, 'Construction site' speech-level type to 'Addition' text-level type for medium level, and 'Terminal' speech-level type to 'Replace' or 'Remove' textlevel type for hard level. 
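The alignment step used in prior ASRP work to label hypothesis tokens as errors or non-errors can be sketched with a standard minimum-edit-distance dynamic program plus a backtrace; this is a generic illustration rather than any particular paper's implementation.

```python
def align_and_label(hyp, ref):
    """Align ASR hypothesis words to the reference transcription with minimum
    edit distance and label each position as 'correct', 'sub', 'ins'
    (hypothesis-only) or 'del' (reference-only)."""
    n, m = len(hyp), len(ref)
    # dp[i][j] = edit distance between hyp[:i] and ref[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i
    for j in range(m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j - 1] + cost,   # match / substitution
                           dp[i - 1][j] + 1,          # insertion in hypothesis
                           dp[i][j - 1] + 1)          # deletion from reference
    # Backtrace to recover per-token labels.
    labels, i, j = [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] + (hyp[i - 1] != ref[j - 1]):
            labels.append((hyp[i - 1], ref[j - 1],
                           "correct" if hyp[i - 1] == ref[j - 1] else "sub"))
            i, j = i - 1, j - 1
        elif i > 0 and dp[i][j] == dp[i - 1][j] + 1:
            labels.append((hyp[i - 1], None, "ins"))
            i -= 1
        else:
            labels.append((None, ref[j - 1], "del"))
            j -= 1
    return labels[::-1]

# Example usage:
# align_and_label("i prefer to use hm".split(), "i prefer to use atm".split())
```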
For Clova, the tendency of 'Replace' text-level type in 'Individual transportation' speech-level type is strongest at easy level, and it is strongly related to 'Syntax' and 'Replace' text-level type at medium level. At the hard level, it has a strong tendency to 'Remove' and 'Syntax' text-level types. | 1,471 | 2,076 | 1,471 |
Scheduled Multi-task Learning for Neural Chat Translation | Neural Chat Translation (NCT) aims to translate conversational text into different languages. Existing methods mainly focus on modeling the bilingual dialogue characteristics (e.g., coherence) to improve chat translation via multi-task learning on small-scale chat translation data. Although the NCT models have achieved impressive success, it is still far from satisfactory due to insufficient chat translation data and simple joint training manners. To address the above issues, we propose a scheduled multi-task learning framework for NCT. Specifically, we devise a three-stage training framework to incorporate the large-scale in-domain chat translation data into training by adding a second pre-training stage between the original pre-training and fine-tuning stages. Further, we investigate where and how to schedule the dialogue-related auxiliary tasks in multiple training stages to effectively enhance the main chat translation task. Extensive experiments on four language directions (English↔Chinese and English↔German) verify the effectiveness and superiority of the proposed approach. Additionally, we will make the large-scale indomain paired bilingual dialogue dataset publicly available for the research community. 1 | A cross-lingual conversation involves speakers in different languages (e.g., one speaking in Chinese and another in English), where a chat translator can be applied to help them communicate in their native languages. The chat translator bilaterally converts the language of bilingual conversational text, e.g. from Chinese to English and vice versa To further improve the chat translation performance through modeling dialogue characteristics (e.g., coherence), inspired by previous studies To address the above issues, we present a Scheduled Multi-task Learning framework (SML) for NCT, as shown in Fig. We validate our SML framework on two datasets: BMELD Our contributions are summarized as follows: • We propose a scheduled multi-task learning framework with three training stages, where a gradient-based scheduling strategy is designed to fully exert the auxiliary tasks' advantages for the main NCT task, for higher translation quality. • Extensive experiments on four chat translation tasks show that our model achieves new stateof-the-art performance and outperforms the existing NCT models by a significant margin. • We contribute two large-scale in-domain paired bilingual dialogue corpora (28M for En↔Zh and 18M for En↔De) to the research community. 2 Background: Conventional Multi-task Learning for NCT We introduce the conventional multi-task learning framework | In a bilingual conversation, we assume the two speakers have alternately given utterances in different languages for u turns, resulting in X 1 , X 2 , X 3 , ..., X u and Y 1 , Y 2 , Y 3 , ..., Y u on the source and target sides, respectively. Among these utterances, X 1 , X 3 , X 5 , ..., X u are originally spoken and Y 1 , Y 3 , Y 5 , ..., Y u are the corresponding translations in the target language. Similarly, Y 2 , Y 4 , Y 6 , ..., Y u-1 are originally spoken and X 2 , X 4 , X 6 , ..., X u-1 are the translated utterances in the source language. According to languages, we define the dialogue history context of X u on the source side as The goal of an NCT model is to translate X u to Y u with dialogue history context C Xu and C Yu . The NCT model In the encoder, it takes [C Xu ; X u ] as input, where [; ] denotes the concatenation. 
The input embedding consists of word embedding WE, position embedding PE, and turn embedding TE: where WE ∈ R |V |×d and TE ∈ R |T |×d . 5 When computation in the encoder, words in C Xu can only be attended by those in X u at the first encoder layer while C Xu is masked at the other layers, which is the same implementation as in In the decoder, at each decoding time step t, the top-layer (L-th) decoder hidden state h L d,t is fed into a softmax layer to predict the probability distribution of the next target token: , where Y u,<t denotes the preceding tokens before the t-th time step in the utterance Finally, the training loss is defined as follows: (1) To generate coherent translation, The training objective of this task is formulated as: is the L-th decoder hidden state at the t-th decoding step, W m and b m are trainable parameters. XRG. Similar to MRG, the NCT model is also jointly trained to generate the corresponding utterance Y u which is coherent to the given dialogue 5 |V |, |T | and d denote the size of shared vocabulary, maximum dialogue turns, and the hidden size, respectively. history context C Xu in the source language: ), where W c and b c are trainable parameters. NUD. The NUD task aims to distinguish whether the translated text is coherent to be the next utterance of the given dialogue history context. Specifically, the positive and negative samples are firstly constructed: (1) the positive sample (C Yu , Y u + ) with the label ℓ = 1 consists of the target utterance Y u and its dialogue history context C Yu ; (2) the negative sample (C Yu , Y u -) with the label ℓ = 0 consists of the identical C Yu and a randomly selected utterance Y u -from the preceding context of Y u . Formally, the training objective of NUD is defined as follows: where H Yu and H C Yu denote the representations of the target utterance Y u and C Yu , respectively. Concretely, H Yu is calculated as while H C Yu is defined as the encoder hidden state h L e,0 of the prepended special token '[CLS]' of C Yu . W n is the trainable parameter of the NUD classifier and the bias term is omitted for simplicity. With the main chat translation task and three auxiliary tasks, the total training objective of the conventional multi-task learning is formulated as: where α is the balancing factor between L NCT and other auxiliary objectives. In this section, we introduce the proposed Scheduled Multi-task Learning (SML) framework, including three stages: general pre-training, indomain pre-training, and in-domain fine-tuning, as shown in Fig. For the second in-domain pre-training, we firstly build an in-domain paired bilingual dialogue data and then conduct pre-training on it. To construct the paired bilingual dialogue data, we firstly crawl the in-domain consecutive movie subtitles of En↔Zh and download the consecutive movie subtitles of En↔De on related websites According to the finding that multi-task learning can enhance the NCT model yield better performance. Then, we conclude some multi-task learning findings that could motivate us to investigate how to use these auxiliary tasks well. XNUD. Similar to the NUD task described in § 2.3, the XNUD aims to distinguish whether the translated text is coherent to be the next utterance of the given cross-lingual dialogue history context. Compared to the NUD task, the different point lies in the cross-lingual dialogue context history, i.e., a positive sample (C Xu , Y u + ) with the label ℓ = 1 and a negative sample (C Xu , Y u -) with the label ℓ = 0. 
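Before the formal objective below, a minimal sketch of the positive/negative pair construction just described for the NUD/XNUD-style discrimination tasks: the negative utterance is drawn from the preceding dialogue context, as stated above. Utterances and function names are purely illustrative.

```python
import random
from typing import List, Tuple

def build_nud_samples(
    history: List[str],          # dialogue history context (C_Yu for NUD, C_Xu for XNUD)
    target_utterance: str,       # Y_u, the true next utterance on the target side
    rng: random.Random,
) -> List[Tuple[List[str], str, int]]:
    """Return (context, utterance, label) pairs: label 1 for the true next
    utterance, label 0 for a distractor sampled from the preceding context."""
    positive = (history, target_utterance, 1)
    distractor = rng.choice(history) if history else target_utterance
    negative = (history, distractor, 0)
    return [positive, negative]

rng = random.Random(0)
samples = build_nud_samples(
    ["Hello!", "Hi, long time no see.", "How have you been?"],
    "Pretty good, thanks for asking.",
    rng,
)
for _ctx, utt, label in samples:
    print(label, "|", utt)
```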
Formally, the training objective of XNUD is defined as follows: where H C Xu denotes the representation of C Yu , which is calculated as same as H C Yu in NUD. W x is the trainable parameter of the XNUD classifier and the bias term is omitted for simplicity. Findings. Based on four auxiliary tasks (MRG, XRG, NUD, and XNUD), we investigate in which stage in Fig. • Each auxiliary task can always bring improvement compared with the NCT model w/o task; • By contrast, XRG and XNUD tasks perform relatively poorly in the final fine-tuning stage than MRG and NUD tasks; • Some tasks used only in one stage (e.g., XRG and XNUD in the second stage) perform better than being used in both stages, revealing that different auxiliary tasks may prefer different stages to exert their advantages; (one best setting seems that all tasks are used in the second stage while only MRG and NUD tasks are used in the final fine-tuning stage.) • Using all auxiliary tasks in a conventional multi-task learning manner does not obtain significant cumulative benefits. Given the above findings, we wonder whether there exists a strategy to dynamically schedule them to exert their potential for the main NCT task. Inspired by The core ideas behind the gradient-based SML algorithm are: (1) when the cosine similarity between g k and g nct is positive, i.e., the gradient projection g ′ k is in the same gradient descent direction with the main NCT task, i.e., Fig. Our training process includes three stages: the first pre-training stage on the general-domain sentence pairs (X, Y ): the second in-domain pre-training stage, and the final in-domain fine-tuning stage on the chat translation data: where T is the auxiliary tasks set and we keep the balancing hyper-parameter α. Although the form of L k is the same with Eq. 2, the gradient that participates in updating model parameters is different where it depends on the gradient descent direction of the NCT task in Eq. 4. At inference, all auxiliary tasks are not involved and only the NCT model after scheduled multi-task fine-tuning is applied to chat translation. Datasets. The training of our SML framework consists of three stages: (1) pre-train the model on a large-scale sentence-level NMT corpus (WMT20 Following In this paper, we adopt the settings of standard Transformer-Base and Transformer-Big in Sentence-level NMT Systems. Trans. w/o FT and Trans. Context-aware NMT Systems. Dia-Trans. CPCC CSA-NCT In Tab. 2, We report the main results on En↔Zh and En↔De under Base and Big settings. In Tab. 3, we present additional results on En↔Zh. Results on En↔Zh. Under the Base setting, our model significantly outperforms the sentencelevel/context-aware baselines by a large margin (e.g., the previous best "CSA-NCT"), 4.58↑ on En→Zh and 4.06↑ on Zh→En, showing the effectiveness of the large-scale in-domain data and our scheduled multi-task learning. In terms of TER, SML also performs best on the two directions, 5.0↓ and 4.3↓ than "CPCC" (the lower the better), respectively. Under the Big setting, our model consistently surpasses all existing systems once again. The rows 0∼2 use the pre-training-then-fine-tuning (i.e., two-stage) paradigm while row 3 is the proposed threestage method. For a fair comparison, the final finetuning stage of rows 0∼3 is all trained in the conventional multi-task training manner and the only difference is the usage of the in-domain data. Specifically, row 0 denotes without using the in-domain data. 
Row 1 denotes that we incorporate the in-domain data into the first pre-training stage ( 1 ⃝). Row 2 denotes that we introduce the in-domain data into the fine-tuning stage ( 2 ⃝). Row 3 denotes that we add a second pre-training stage to introduce the in-domain data. Additional Results. Tab. 2 presents our overall model performance, though, strictly speaking, it is unfair to directly compare our approaches with previous ones. Therefore, we conduct additional experiments in Tab. 3 under two settings: (i) using the original pre-training-then-fine-tuning framework without introducing the large-scale in-domain data (i.e., "Two-stage w/o data" group); (ii) using the proposed three-stage method with the largescale in-domain data (i.e., "Three-stage w/ data" group). And we conclude that (1) the same model (e.g., SML) can be significantly enhanced by the second in-domain pre-training stage, demonstrating the effectiveness of the second pre-training on the in-domain data; (2) our SML model always exceeds the conventional multi-task learning model "M-NCT" in both settings, indicating the superiority of the scheduled multi-task learning strategy. We conduct ablation studies in Tab. 4 and Tab. 5 to answer the following two questions. Q1: why a three-stage training framework? and Q2: why the scheduled multi-task learning strategy? To answer Q1, in Tab. 4, we firstly investigate the effect of the large-scale in-domain chat translation data and further explore where to use it. Firstly, the results of rows 1∼3 substantially outperform those in row 0, proving the availability of incorporating the in-domain data. Secondly, the results of Table To answer Q2, we investigate multiple multitask learning strategies in Tab. 5. Firstly, the results of row 3 are notably higher than those of rows 0∼2 in both language directions, obtaining significant cumulative benefits of auxiliary tasks than rows 0∼2, demonstrating the validity of the proposed SML strategy. Secondly, the results of row 3 vs row 4 show that the inverse gradient projection of auxiliary tasks also has a positive impact on the model performance, which may prevent the model from overfitting, working as a regularizer. All experiments show the superiority of our scheduled multi-task learning strategy. Inspired by 1. semantically coherent with the dialogue history? 2. fluent and grammatically correct? Firstly, we randomly sample 200 conversations from the test set of BMELD in En→Zh. Then, we use 6 models in Tab. 6 to generate translated utterances of these sampled conversations. Finally, we assign the translated utterances and their corre- sponding dialogue history utterances in the target language to three postgraduate human annotators, and then ask them to make evaluations (0/1 score) according to the above two criteria, and average the scores as the final result. Tab. 6 shows that our model generates more coherent and fluent translations when compared with other models (significance test, p < 0.05), which shows the superiority of our model. The inter-annotator agreements calculated by the Fleiss' kappa We measure dialogue coherence as sentence similarity following We investigate the effect of the XNUD task. As shown in Tab. 8, the "M-NCT" denotes the multitask learning model jointly trained with four auxiliary tasks in conventional manner. After removing the XNUD task, the performance drops to some extend, indicating that the new XNUD task achieves further performance improvement based on three existing auxiliary tasks Neural Chat Translation. 
The goal of NCT is to train a dialogue-aware translation model using the bilingual dialogue history, which is different from document-level/sentence-level machine translation Multi-task Learning. Conventional multi-task learning (MTL) This paper proposes a scheduled multi-task learning framework armed with an additional in-domain pre-training stage and a gradient-based scheduled multi-task learning strategy. Experiments on En↔Zh and En↔De demonstrate that our framework significantly improves translation quality on both BLEU and TER metrics, showing its effectiveness and generalizability. Human evaluation further verifies that our model yields better translations in terms of coherence and fluency. Furthermore, we contribute two large-scale in-domain paired bilingual dialogue datasets to the research community. As mentioned in § 4.1, our experiments involve the WMT20 dataset for general-domain pre-training, the newly constructed in-domain chat translation data for the second pre-training (please refer to § 3.1), and two target chat translation corpora, BMELD WMT20. Following To pre-process the raw data, we employ a series of open-source/in-house scripts, including full-/halfwidth conversion, unicode conversation, punctuation normalization, and tokenization BMELD. The dataset is a recently released English↔Chinese bilingual dialogue dataset, provided by | 1,231 | 1,375 | 1,231 |
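Returning to the gradient-based scheduling strategy described above: one plausible reading is that each auxiliary task's gradient is kept or adjusted according to its cosine similarity with the main NCT gradient. The snippet below is a minimal sketch of that idea under a PCGrad-style projection assumption; it is not the authors' implementation, and the paper also reports benefits from keeping an inverse projection as a regularizer.

```python
import torch

def schedule_auxiliary_gradient(g_aux: torch.Tensor, g_main: torch.Tensor) -> torch.Tensor:
    """Assumed rule: if the auxiliary gradient conflicts with the main NCT gradient
    (negative cosine similarity), remove the conflicting component; otherwise keep it."""
    dot = torch.dot(g_aux, g_main)
    if dot >= 0:                      # same descent direction: keep as-is
        return g_aux
    # Conflicting direction: project out the component along g_main
    return g_aux - dot / (g_main.norm() ** 2 + 1e-12) * g_main

# Toy example with flattened gradients
g_nct = torch.tensor([1.0, 0.0])
g_xrg = torch.tensor([-0.5, 1.0])     # partially conflicts with the main task
print(schedule_auxiliary_gradient(g_xrg, g_nct))   # conflicting component removed
```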
Adversarial Stylometry in the Wild: Transferable Lexical Substitution Attacks on Author Profiling | Written language contains stylistic cues that can be exploited to automatically infer a variety of potentially sensitive author information. Adversarial stylometry intends to attack such models by rewriting an author's text. Our research proposes several components to facilitate deployment of these adversarial attacks in the wild, where neither data nor target models are accessible. We introduce a transformerbased extension of a lexical replacement attack, and show it achieves high transferability when trained on a weakly labeled corpusdecreasing target model performance below chance. While not completely inconspicuous, our more successful attacks also prove notably less detectable by humans. Our framework therefore provides a promising direction for future privacy-preserving adversarial attacks. | The widespread use of machine learning on consumer devices and its application to their data has sparked investigation of security and privacy researchers alike in correctly handling sensitive information Privacy-preserving defenses against such inferences can be found in the field of adversarial Adversarial attacks on NLP are predominantly aimed at demonstrating vulnerabilities in existing algorithms or models, such that they might be fixed, or explicitly improved through adversarial training. Consequently, most related work focuses on white or black-box settings, where all or part of the target model is accessible (e.g., its predictions, data, parameters, gradients, or probability distribution) to fit an attack. The current research, however, does not intend to improve the targeted models; rather, we want to provide the attacks as tools to protect online privacy. This introduces several constraints over other NLP-based adversarial attacks, as it calls for a realistic, in-the-wild scenario of application. Firstly, authors seeking to protect themselves from stylometric analysis cannot be assumed to be knowledgeable about the target architecture, nor to have access to suitable training data (as the target could have been trained on any domain). Hence, we cannot optimally tailor attacks to the target, and need an accessible method of mimicking it to evaluate the obfuscation success. To facilitate this, we use a so-called substitute model, which for our purposes is an author profiling classifier trained in isolation (with its own data and architecture) that informs our attacks. Attacks fitted on substitute models have been shown to transfer their success when targeting models with different architectures, or trained on other data, in a variety of machine learning tasks Secondly, for an obfuscation attack to work in practice (e.g., given a limited post history), it should suggest relevant changes -to-the author's writing on a domain of their choice. This implies the substitute models should be fitted locally, and therefore need to meet two criteria: reliable access to labeled data, and being relatively fast and easy to train. 
To meet the first criterion, the current research focuses on gender prediction, as: i) Twitter corpora annotated with this variable are by far the largest (and most common), ii) author profiling methods typically use similar architectures for different attributes; therefore, the generalization of attacks to other author attributes can be assumed to a large extent, and, most importantly, iii) As for the attacks, we focus on lexical substitution of content words strongly related to a given label, as those have been shown to explain a significant portion of the accuracy of stylometric models (see e.g., | Stylometry, the study of (predominantly) writing style, dates back several decades It is perhaps for this reason that most obfuscation work uses heuristically-driven, controlled changes such as splitting or merging words or sentences, removing stop words, changing spelling, punctuation, or casing (see e.g., Our work does not assume the attacks to run end-to-end, but with a hypothetical human in the loop. We further opt for techniques that are more likely to find strong semantic mirrors to the original text while making minimal changes. A substitute model (the algorithm, hyper-parameters, and output of which an author can manipulate as desired) is employed to indicate candidate replacement words, and our attacks suggest and rank those against this substitute. Moreover, prior work typically attacks adversaries trained on the same data, whereas we add a transferability measure. Lastly, while au-thor identification has been investigated in the wild Our attack framework extends TextFooler We are given a target classifier f , substitute classifier f , a document D consisting of tokens D i , and a target label y. Here, f is trained on some corpus X, and receives an author's new input text D, where the author provides label y. We denote a class label as ȳ if f (D) predicts anything but y. Our perturbations form adversarial input D ADV , that intends to produce f (D ADV ) = ȳ, and thereby implicitly f (D ADV ) = ȳ. Note that we only submit D to f for evaluating the attack effectiveness, and it is never used to fit the attack itself. To create D ADV , a minimum number of edits is preferred, and thus we rank all words in D by their omission score (similar to e.g., (1) With I D i calculated for all words in D, the top k ranked tokens are chosen as target words T . Four approaches to perturb a target word t ∈ T are considered in our experiments. These operations are referred to as candidates in Algorithm 1. This TF-based substitution embeds t as t using a pre-trained embedding matrix V . C t is selected by computing the cosine similarity between t and all available wordembeddings w ∈ V . We denote cosine similarity with Λ(t, w). A threshold δ is used to keep only reliable candidates Λ(t, w) > δ. Masked Substitution (MB) The embeddingbased substitutions can be replaced by a language model predicting the contextually most likely token. BERT Dropout Substitution (DB) A method to circumvent the former (i.e., BERT's masked prediction limitations for lexical substitution), was presented by Heuristic Substitution To evaluate the relative performance of the techniques we described before, we employ several heuristic attacks as baselines. In the order of Table Given C t , either all, or only the highest ranked candidate can be accepted as-is. Alternatively, all D can be filtered by submitting them to checks, or reranked based on their semantic consistency with D. 
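Before turning to the candidate filtering and ranking operations, the sketch below illustrates the word-importance ranking via omission scores and the embedding-based candidate generation with a cosine-similarity threshold described above. The substitute classifier is abstracted as a callable returning the target-label probability, and the tiny embedding table is a stand-in; both are assumptions for illustration, not the authors' code.

```python
from typing import Callable, Dict, List, Tuple
import numpy as np

def omission_scores(tokens: List[str],
                    prob_target: Callable[[List[str]], float]) -> List[Tuple[str, float]]:
    """Score each token by the drop in the substitute model's probability of the
    target label when the token is removed (a larger drop means more important)."""
    base = prob_target(tokens)
    scored = []
    for i, tok in enumerate(tokens):
        reduced = tokens[:i] + tokens[i + 1:]
        scored.append((tok, base - prob_target(reduced)))
    return sorted(scored, key=lambda x: x[1], reverse=True)

def embedding_candidates(target: str,
                         embeddings: Dict[str, np.ndarray],
                         n: int = 50,
                         delta: float = 0.7) -> List[str]:
    """Nearest neighbours of the target word by cosine similarity, keeping only
    candidates above the threshold delta (0.7, as in the setup described above)."""
    if target not in embeddings:
        return []
    t = embeddings[target]
    t = t / np.linalg.norm(t)
    sims = []
    for word, vec in embeddings.items():
        if word == target:
            continue
        sims.append((word, float(t @ (vec / np.linalg.norm(vec)))))
    sims.sort(key=lambda x: x[1], reverse=True)
    return [w for w, s in sims[:n] if s > delta]

# Toy usage with a dummy substitute model and a tiny embedding table
dummy_prob = lambda toks: 0.9 if "great" in toks else 0.4
print(omission_scores(["the", "movie", "was", "great"], dummy_prob))
emb = {"good": np.array([1.0, 0.0]), "great": np.array([0.9, 0.1]), "bad": np.array([-1.0, 0.2])}
print(embedding_candidates("great", emb, n=5, delta=0.7))
```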
These operations are referred to as rank/filter in Algorithm 1-both of which can be executed. Part-of-Speech and Document Encoding TF employs two checking components: first, it removes any c that has a different POS tag than t. If multiple D exist so that f (D ) = ȳ, it selects the document D which has the highest cosine similarity to the Universal Sentence Encoder (USE) embedding BERT Similarity Zhou et al. ( Lastly, we include a weakly labeled author profiling corpus by Preprocessing & Sampling All three corpora were tokenized using spaCy For the extension of TF, we re-implemented code We adopt the same parameter settings as Jin et al. (2020) throughout our TF experiments: they set N (considered synonyms) and δ (cosine similarity minimum) empirically to 50 and 0.7 respectively. For MB and DB, we capped T at 50 and top-k at 10 (to improve speed). For DB, we follow For f and f we require (preferably fast) pipelines that achieve high accuracy on author profiling tasks, and are sufficiently distinct to gauge how well our attacks transfer across architectures, rather than solely across corpora. As state-of-the-art algorithms have not yet proven to be sufficiently effective for author profiling Logistic Regression Logistic Regression (LR) trained on tf•idf using uni and bi-gram features proved a strong baseline in author profiling in prior work. The simplicity of this classifier also makes it a substitute model that can realistically be run by an author. No tuning was performed: C is set to 1. The New Groningen Author-profiling Model (N-GrAM) from To summarize (and see Note that we are predominantly interested in transferability, and would therefore like to test as many combinations of data and architecture access limitations as possible. If we assume an author does not have access to the data, the substitute classifier is trained on any other data than the Volkova et al. corpus. If we assume the author does not know the target model architecture, the target model is N-GrAM (rather than LR). A full model transfer setting (in both data and architecture) will therefore be, e.g.: data f = Emmery et al., data f = Volkova et al., f = LR, and f = NGrAM. Finally, for comparison to an optimal situation, we test a setting where we do have access to the adversary's data. Metrics The obfuscation success is measured as any accuracy score below chance level performance, which given our test sample is 55%. We would argue that random performance is preferred in scenarios where the prediction of the opposite label is undesired. For the current task, however, any accuracy drop to around or lower than chance level satisfies the conditions for successful obfus- cation. 12 To evaluate the semantic preservation of the attacked sentences, we calculate both METEOR Human Evaluation For the human evaluation, we sampled 20 document pieces (one or more tweets) for each attack type in the best performing experimental configuration. A piece was chosen if it satisfied these criteria: i) contains changes for all three attacks, ii) consists of at least 15 words (excluding emojis and tags), and iii) does not contain obvious profanity. 14 All 60 document pieces of the three models were shuffled, and the 20 original versions were appended at the end (so that 'correct' pieces were seen last). Each substitute model therefore has 80 items for evaluation. 
While in prior work it is common to rate semantic consistency, fluency, and label a text (see e.g., The items were rated individually; the human evaluators did not know beforehand that different versions of the same sentences were repeated, nor 12 If an attack drops accuracy to 0%, this effectively flips (in case of a binary label) the label. This label might also be undesired by the author (e.g., being classified as having polar opposite political views). This implies the target model being maximally unsure about the classification is desirable. 13 As we alluded to in Section 4.1, both corpora used to train our substitute models were in fact not reference corpora for author profiling, and can therefore be considered as suboptimal, disjoint domains. The Huang et al. corpus in particular shows a strong domain shift (see Table The results for all attacks are shown in Table Transferability can be assessed by comparing the LR and N-GrAM (NG) columns. Globally it can be observed that the substitute models trained on the Emmery et al. corpus systematically outperform those trained on Huang et al.; both for the settings where the adversary's architecture is known (LR), and where it is unknown (NG). This matches our expectations from the observed domain shift. Our results also show that a noticeable decrease in obfuscation performance occurs (10-30% increased target model performance) when the attacks are transferred to different data and another model. In contrast, as can be observed from the last two columns in Table Looking at the Top-1, Check and Check brackets (Table As 15 Jin et al. ( The metrics in Figure The results in Table 6 Discussion and Future Work We have demonstrated the performance of author attribute obfuscation under a realistic setting. Using a simple Logistic Regression model for candidate suggestion, trained on a weakly labeled corpus collected in a day, the attacks successfully transferred to different data and architectures. This is a promising result for future adversarial work on this task, and its practical implementation. It remains challenging to automatically evaluate how invasive the required number of changes are for successful obfuscation-particularly to an author's message consistency as a whole. However, in practice such considerations could be left up to the author. In this human-in-the-loop scenario, a more extensive set of candidates could be suggested, and their effect on the substitute model shown interactively. This way, the attacks can be manually tuned to find a balance of effectiveness, inconspicuousness, and to guarantee semantic consistency. It would also show the author how their writing style affects potential future inferences. Regarding the performance of the attacks: we demonstrated the general effectiveness of contextual language models in retrieving candidate suggestions. However, the quality of those candidates might be improved with more extensive rule-based checks; e.g., through deeper analyses using parsing. Nevertheless, such venues leave us with a core limitation of rewriting language, and therefore more broadly NLP: while the Masked attacks seemed more successful in our experiments, after manual inspection of the perturbations Dropout was found to often be semantically closer (see also Table As such, we would argue that future work should focus on making as few perturbations as possible, retaining only the minimum amount of required obfuscation success. 
Given this, the other constraints become less relevant; one could generate short sentences (e.g., a single tweet) that might be semantically or contextually incorrect, but if it is a message in a long post history, it will hardly be detectable or intrusive. This would require certain triggers (as demonstrated by In our work, we argued realistic adversarial stylometry should be tested on transferability in settings where there is no access to the target model's data or architecture. We extended previous adversarial text classification work with two transformer-based models, and studied their obfuscation success in such a setting. We showed them to reliably drop target model performance below chance, though human detectability of the attacks remained above chance. Future work could focus on further minimizing this detection under our realistic constraints. | 807 | 2,766 | 807 |
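For reference, the Logistic Regression substitute profiler described above (tf·idf over word uni- and bi-grams, C = 1, no tuning) can be assembled in a few lines of scikit-learn; the toy training texts and labels below are purely illustrative placeholders for a weakly labeled profiling corpus.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Substitute author-profiling classifier: tf-idf uni/bi-grams + LR (C=1)
substitute = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(C=1.0, max_iter=1000),
)

# Toy data standing in for a weakly labeled gender-profiling corpus
texts = ["love this game so much", "new makeup haul today",
         "match day with the lads", "brunch with my girls"]
labels = ["M", "F", "M", "F"]
substitute.fit(texts, labels)
print(substitute.predict_proba(["game night with friends"]))
```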
Learning to Generate Task-Specific Adapters from Task Description | Pre-trained text-to-text transformers such as BART have achieved impressive performance across a range of NLP tasks. Recent study further shows that they can learn to generalize to novel tasks, by including task descriptions as part of the source sequence and training the model with (source, target) examples. At test time, these fine-tuned models can make inferences on new tasks using the new task descriptions as part of the input. However, this approach has potential limitations, as the model learns to solve individual (source, target) examples (i.e., at the instance level), instead of learning to solve tasks by taking all examples within a task as a whole (i.e., at the task level). To this end, we introduce HYPTER, a framework that improves text-to-text transformer's generalization ability to unseen tasks by training a hypernetwork to generate task-specific, light-weight adapters from task descriptions. Experiments on ZEST dataset and a synthetic SQuAD dataset demonstrate that HYPTER improves upon fine-tuning baselines. Notably, when using BART-Large as the main network, HYPTER brings 11.3% comparative improvement on ZEST dataset. 1 | Pre-trained text-to-text models unseen tasks with the source sequence containing new task descriptions. While this initial attempt shows positive results, there are two potential limitations for the direct finetuning approach. (1) Predictions can be sensitive to the task descriptions (or "prompts") that are heuristically designed (2) The model still learns from individual (source, target) examples, instead of learning to solve tasks at a higher level, by explicitly taking multiple examples within a task as a whole (see Fig. In this work, we follow the settings in work that employs a hypernetwork We apply HYPTER to two datasets: ZEST | We study the problem of learning from task description For instance, in the ZEST dataset Our work is built on adapters Overview. Fig. (1) A main network, which is a pre-trained text-totext model. We instantiate the main network with BART-Base/Large Hypernetwork. The hypernetwork consists of an encoder and multiple decoders. The encoder maps the task description d to a latent representation h 0 , while the decoders use h 0 to generate adapter parameters φ. In our work we instantiated the encoder with a RoBERTa-Base model Datasets. We use two datasets that fit our setup. The first one is Zero-shot Learning from Task Descriptions dataset (ZEST, We construct the second dataset from SQuAD v1 Baseline. To demonstrate the efficacy of the HYPTER framework, we compare it to just its first half -the main text-to-text transformer model that we obtain after the first stage of training. This is identical to the fine-tuning baseline method in Main Results. We present the results for ZEST in Table Model Behavior Analysis on ZEST. ZEST dataset provides a comprehensive analysis protocol by splitting tasks into different generalization types (base, paraphrase, composition, semantic flips, and output structure) and defining four error types (recall, precision, partial, and other). Compared to the BART-Large fine-tuning baseline, our model achieves better performance in "base" and "paraphrase" categories in the ZEST official test set. We also manually inspected dev set predictions produced by the baseline and our model. We found the predictions corrected by our method span across the four error types. 
In particular, the proposed method flipped two "n/a" predictions into the correct answers in the task "Which royalty was this dog breed popular with?" ("base" category), reducing the recall errors and improving the competence metric. We do not observe more granular model behavioral patterns beyond this point. Study of Data Efficiency. We study whether HYPTER is effective when trained with (1) fewer tasks, while the number of examples per task is unchanged; (2) fewer examples per task, while the number of total tasks is kept constant. We experiment with ZEST and BART-Large, and show the performance in Fig. Zero-shot Learning with Transformers. Zeroshot learning (ZSL) has been explored for various NLP tasks, including text classification Adapters for Transformers. Generators. Hypernetwork Listing 1: Train/Dev/Test Partition in Synthetic SQuAD dataset. 1 "train": ["why were", "what years", "who said", " what percent", "when did", "where do", "who is" , "how are", "what decade", "how does", "how long", "where was", "what has", "which two", " who was", "who were", "where are", "where does" , "what did", "how far We use transformers For text-to-text model fine-tuning, we select learning rate from {1e-5, 3e-5, 5e-5}, and select the total number of epochs from {5, 10, 15, 20, 30} for ZEST and {10, 20, 30, 50, 100} for synthetic SQuAD. We use a fixed batch size of 32. For hypernetwork training, we train up to 100 epochs (one epoch here refers to an iteration over all tasks). We update the hypernetwork every b tasks, and we call b as task batch size. When learning from one task, we sample b ′ examples within this task, and we call b ′ as the example batch size. We greedily and sequentially select adapter width d from {4,8,16,32}, learning rate α from {3e-6, 1e-5, 3e-5, 1e-4}, b from {4,8,16,32}, b ′ from {4,8,16,32}, based on dev set performance. Another reasonable baseline is to fine-tune a textto-text model together with randomly initialized adapters plugged in it. We experiment with this method using BART-Large and list the performance in Table In Table It is worth noting that the efficacy of HYPTER is at the cost of introducing new parameters in the hypernetwork. To generate adapter parameters, more parameters are introduced and trained in the hypernetwork. One may achieve better generalization ability to unseen tasks with larger pre-trained models with billions of parameters. In this case, we consider HYPTER as an alternative by augmenting a medium-sized pre-trained model with a hypernetwork. Meanwhile, we highlight our contribution to be the concept of generating task-specific adapters from descriptions and HYPTER's task-level training procedure. | 1,152 | 640 | 1,152 |
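As an illustration of the HYPTER idea described above, a hypernetwork decoder that maps a task-description encoding to the weights of a light-weight adapter, the module below generates the two projection matrices of a bottleneck adapter from a description vector. Layer choices and sizes are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class AdapterGenerator(nn.Module):
    """Generate bottleneck-adapter weights (d_model -> d_adapter -> d_model)
    from a task-description embedding h0."""
    def __init__(self, d_desc: int, d_model: int, d_adapter: int):
        super().__init__()
        self.d_model, self.d_adapter = d_model, d_adapter
        self.down_gen = nn.Linear(d_desc, d_model * d_adapter)  # produces W_down
        self.up_gen = nn.Linear(d_desc, d_adapter * d_model)    # produces W_up

    def forward(self, h0: torch.Tensor, hidden: torch.Tensor) -> torch.Tensor:
        # h0: (d_desc,) task-description encoding; hidden: (seq, d_model) main-network states
        w_down = self.down_gen(h0).view(self.d_model, self.d_adapter)
        w_up = self.up_gen(h0).view(self.d_adapter, self.d_model)
        # Residual bottleneck adapter applied with the generated weights
        return hidden + torch.relu(hidden @ w_down) @ w_up

gen = AdapterGenerator(d_desc=768, d_model=768, d_adapter=64)
h0 = torch.randn(768)            # e.g. an encoding of the task description
hidden = torch.randn(10, 768)    # hidden states of one example in the main network
print(gen(h0, hidden).shape)     # torch.Size([10, 768])
```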
Entity-Focused Dense Passage Retrieval for Outside-Knowledge Visual Question Answering | Most Outside-Knowledge Visual Question Answering (OK-VQA) systems employ a twostage framework that first retrieves external knowledge given the visual question and then predicts the answer based on the retrieved content. However, the retrieved knowledge is often inadequate. Retrievals are frequently too general and fail to cover specific knowledge needed to answer the question. Also, the naturally available supervision (whether the passage contains the correct answer) is weak and does not guarantee question relevancy. To address these issues, we propose an Entity-Focused Retrieval (EnFoRe) model that provides stronger supervision during training and recognizes questionrelevant entities to help retrieve more specific knowledge. Experiments show that our En-FoRe model achieves superior retrieval performance on OK-VQA, the currently largest outside-knowledge VQA dataset. We also combine the retrieved knowledge with state-of-theart VQA models, and achieve a new state-ofthe-art performance on OK-VQA. | Passage retrieval under a multi-modal setting is a critical prerequisite for applications such as outsideknowledge visual question answering (OK-VQA) In this work, we investigate two main drawbacks of recent dense retrievers First, as most retrieval models encode the query and passages as a whole, they fail to explicitly discover entities critical to answering the question Second, on the supervision side, the positive signals are often passages containing the right answers with top sparse-retrieval scores such as BM 25 In order to address these shortcomings, we propose an Entity-Focused Retrieval (EnFoRe) model that improves the quality of the positive passages for stronger supervision. EnFoRe automatically identifies critical entities for the question and then retrieves knowledge focused on them. We focus on entities that improve a sparse retriever's performance if emphasized during retrieval as critical entities. We use the top passages containing both critical entities and the correct answer as positive supervision. Then, our EnFoRe model learns two scores to indicate (1) the importance of each entity given the question and the image and (2) a score that measured how well each entity fits the context of each candidate passage. We evaluate EnFoRe on OK-VQA | Visual Question Answering (VQA) has witnessed remarkable progress over the past few years, in terms of both the scope of the questions Sparse Retrieval: Before the recent proliferation of transformer-based dense passage retrieval models Motivated by the trend toward dense retrievers, previous work has also applied them to OK-VQA. This work proposes an Entity-Focused Retrieval (EnFoRe) model that recognizes key entities for the visual question and retrieves question-relevant knowledge specifically focused on them. Our approach also benefits from stronger passage-retrieval supervision with the help of those key entities. The most relevant work to ours is phrase-based dense passage retrieval. Our EnFoRe model is empowered by a comprehensive set of extracted entities. Entities are not limited to phrases from the question and passages as in Entities from Questions: First, the noun phrases in questions usually reveal critical entities. Following Question-based entities are high precision and narrow down the search space for knowledge retrievers. To complement this, we also collect imagebased entities to help achieve higher recall. 
Entities from Azure tagging: Following Given the comprehensive set of entities E covering different aspects of the question and image, we introduce an approach to automatically find critical entities and passages containing them. Then, those entities and passages are used during training to provide more substantial supervision. The intuition is that a good passage that fits the visual question's context should mention both the key entities and the correct answer. Also, emphasizing critical entities should improve retrieval performance. Given a question q, we use BM25 We use summed reciprocal ranking instead of reciprocal ranking since it provides more stable scores for evaluating the set of retrieved passages and does not overweight the highest ranked document. Then, for each entity e ∈ E, we retrieve another set of passages P e using an entity-emphasizing query where the entity is appended to the end of the question. Note that the BM25 retriever does not take word order into account, so simply appending entities will not lead to undesired results due to the linguistic disfluency of the query. The final score for an entity S(e) is computed as the difference between the SRR of these two sets of retrieved passages, i.e. S(e) = SRR(P e ) -SRR(P init ). We regard entities with S(e) over a threshold θ as critical entities, i.e. E oracle = {e ∈ E|S(e) > θ}. Qu et al. ( , where p + e denotes the first passage that contains both the right answer and the oracle entity. On average, there are 3.4 new positive passages per question. The negative passages are the same as those in Entity-Focused Retrieval (EnFoRe) automatically recognizes critical entities and retrieves questionrelevant knowledge specifically focused on them. "proj" denotes a projection function that consists of an MLP layer with layer-norm as normalization. Query encoder: As observed by (2) Passage encoder: Following Entity encoder: In order to provide query context for each entity, we append the question and a generated image caption EnFoRe aims to retrieve question-relevant knowledge that focuses on critical entities. Therefore, the similarity metric consists of two parts: a question relevancy term and an entity focus term. Modeling question relevancy: We model the question relevancy term S qp as the inner-product of the query and passage features, i.e. S qp (q, p) = f T q f p . During inference, as the query and passage features are decomposable, maximum inner product search (MIPS) can be applied to efficiently retrieve top passages for the query. Modeling entity focus: The entity focus term consists of two parts, where query features are used to identify critical entities from the set of entities in Sec. 3, and passage features are used to determine whether it contains these key entities. For each entity, we compute the query-entity score S qe (q, e) as the inner-product of the projected query and entity feature, i.e. S qe (q, e) = proj(f q ) T proj(f e ), and we compute the passage-entity score as S pe (p, e) = proj(f p ) T proj(f e ). Then, we combine all of the entities and compute the entityfocused score S qpe per Eq. 5: S qpe (q, p, E) = e∈E σ(S qe (q, e)) × S pe (p, e) e∈E σ(S qe (q, e)) (5) where σ denotes the sigmoid function. Another way to interpret Eq. 5 is to treat it as modeling the conditional distribution Pr(p | q) and consider the entities as hidden variables. The final score S(q, p) for the query q and passage p linearly combines both terms, i.e. 
S(q, p) = S qp (q, p) + λS qpe (q, p, E), where the weight λ controls the balance between the these two terms. We train our EnFoRe model with a set of training instances consisting of a query containing the visual question with an image, a positive passage, a retrieved negative passage, and the set of entities. We present more details on constructing the training data in Sec. 6.1. We adopt the "R-Neg+IB-All" setting introduced by Qu et al. ( Test MRR@5 P@5 MRR@5 P@5 BM25-Obj other in-batch passages, as negative samplings. Following previous work The training process takes about 45 hours for each model. We save the parameters every 5000 steps and present the best results (MRR@5) on the validation set. The hidden states size is set to 768 following We employ the current state-of-the-art KAT model We change the original explicit knowledge to the knowledge retrieved by our EnFoRe model. As the retrieved passage contains multiple sentences, and usually not all are relevant, we select the most relevant sentence for each passage. Specifically, following Following We present our passage retriever results in Table VQA Scores Q-only We present the VQA performance of incorporating our EnFoRe knowledge in the state-of-theart KAT model in Table In this work, we presented an Entity-Focused Retrieval (EnFoRe) model for retrieving knowledge for outside-knowledge visual questions. The goal is to retrieve question-relevant knowledge focused on critical entities. We first construct an entity set by parsing the question and the image. Then, En-FoRe predicts a query-entity score, predicting how likely it will lead to finding a correct answer, and a passage-entity score showing how likely the entity fits in the context of the passage. These two scores are combined to re-rank the conventional querypassage relevancy score. EnFoRe demonstrates the clear advantages of improved multi-modal knowledge retrieval and helps improve VQA performance with its improved retrieved knowledge. Our EnFoRe model is empowered by a comprehensive set of parsed entities from the question and the image. However, as shown in the failure cases in the experiment section, those entities may contain detection errors that lead to undesired results. In addition, during training, we adopt a fully automatic scheme for annotating critical entities assuming they can help a sparse retriever achieve better SRR results; however, explicit human annotation could potentially improve the quality of the critical entities identified. While we have explored collecting both question-based and image-based entities in our current approach, they are not fully adequate in that ideally it could be beneficial to include not only the relevant objects for the visual question but other kinds of descriptors that may act as useful clues for knowledge retrieval. Another limitation of the current approach is that we encode each entity separately, ignoring the relationships between entities, which could be helpful for knowledge retrieval. In Figure | 1,010 | 1,278 | 1,010 |
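To make the critical-entity supervision described above concrete: an entity counts as critical if appending it to the query improves the sparse retriever's summed reciprocal rank (SRR) over answer-bearing passages by more than a threshold θ. The sketch below abstracts the retriever as a callable and uses a toy word-overlap ranker in place of BM25; names and the toy corpus are illustrative only.

```python
from typing import Callable, List

def srr(ranked_passages: List[str], answer: str) -> float:
    """Summed reciprocal rank over passages that contain the answer string."""
    return sum(1.0 / rank
               for rank, passage in enumerate(ranked_passages, start=1)
               if answer.lower() in passage.lower())

def critical_entities(question: str,
                      answer: str,
                      entities: List[str],
                      retrieve: Callable[[str], List[str]],
                      theta: float = 0.0) -> List[str]:
    """Keep entities whose emphasis (appended to the query) raises SRR by more than theta."""
    base = srr(retrieve(question), answer)
    return [e for e in entities
            if srr(retrieve(f"{question} {e}"), answer) - base > theta]

# Toy word-overlap retriever standing in for BM25
def _words(text: str) -> set:
    for ch in ".,?":
        text = text.replace(ch, " ")
    return set(text.lower().split())

corpus = ["The Colosseum is a famous landmark in the city of Rome.",
          "The Eiffel Tower is a famous landmark in the city of Paris."]

def toy_retrieve(query: str) -> List[str]:
    q = _words(query)
    return sorted(corpus, key=lambda p: -len(q & _words(p)))

print(critical_entities("In which city is this landmark", "Paris",
                        ["Eiffel Tower", "Colosseum"], toy_retrieve))
# -> ['Eiffel Tower']: emphasizing it pushes the answer-bearing passage to rank 1
```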
EnsLM: Ensemble Language Model for Data Diversity by Semantic Clustering | Natural language processing often faces the problem of data diversity such as different domains, themes, styles and so on. Therefore, a single language model (LM) is insufficient to learn all knowledge from diverse samples. To solve this problem, we firstly propose an autoencoding topic model with mixture prior (mATM) to perform clustering for the data, where the clusters defined in semantic space describe the data diversity. Having obtained the clustering assignment for each sample, we develop the ensemble LM (En-sLM) with the technique of weight modulation. Specifically, EnsLM contains a backbone which is adjusted by a few modulated weights to fit for different sample clusters. As a result, the backbone learns the shared knowledge among all clusters while modulated weights extract the cluster-specific features. EnsLM can be trained jointly with mATM with flexible LM backbone. We evaluate the effectiveness of both mATM and EnsLM on different language understanding and generative tasks. | It is common knowledge in modern natural language processing (NLP) that natural language varies greatly across domains, themes, styles, genres and many other linguistic nuances (Van der Data selection is a commonly used strategy to handle diversity in data (Moore and Lewis, 2010; Inspired by their works and to move beyond, in this paper, we find the semantics learned by topic modeling Our proposed mATM and EnsLM enjoy the following distinguished properties: • The mATM learns the mixture-prior latent semantic space to define a soft clustering assignment for each sample. • Guided by clustering assignments that describe the data diversity, EnsLM learns both shared and cluster-specific knowledge by weight modulations. • Joint training of mATM and EnsLM improves the performance of both on many NLP tasks. | For NLP, topic modeling (TM) 3 Autoencoding topic model with mixture prior We firstly describe one of the most popular topic models, latent Dirichlet allocation (LDA) For a document containing D words as where φ k is a probability distribution over the vocabulary, LDA defines the generative process of w in Algorithm 1, where θ ∈ R K + is the topic proportion with α as the prior parameter. Given Φ, a popular approximation for efficient inference of LDA is mean-field variational inference, which tries to maximize the evidence lower bound (ELBO) of marginal data log likelihood as ELBO = E q(θ) [log p(w|θ, Φ)]-KL[q(θ)||p(θ)], (2) where q(θ) is the variational posterior. In particular, As shown in Fig. Suppose the number of clusters is C, and the clus- pared with LDA, mATM has a mixture Dirichlet prior with parameters {α c } C c=1 . In other words, mATM assumes that the θ of different documents may come from different clusters, which is the basic thought to discover the data diversity from corpus automatically. In order to infer the parameters in mATM and further develop the EnsLM by mATM, we introduce AEVB for mATM, whose detailed structure is shown in Fig. Although Dirichlet prior of θ is important to learn interpretable topics where the elements in mean vector µ and diagonal covariance matrix Σ are To go further, for inference of mATM, we construct the mLN distribution as which is used to approximate the mixture Dirichlet prior p(θ|{α c , π c } C c=1 ) in mATM. 
Therefore, for each document, the prior of θ can be written as In practice, we build the µ c and Σ c as where Next, we build variational posterior for latent variables with easy RT function. After collapsing {i d } D d=1 in mATM as (1) in LDA, given topics Φ, for document w, there are two latent variables that need to be inferred: θ and z. LN posterior for θ. We build the variational posterior of θ as LN distribution q , where diag converts a vector to a diagonal matrix, f W θ µ (•) and f W θ σ (•) are two encoding networks, and x is a type of representation for document w such as original words or bag of words (Bow) vector. Morevoer, LN distribution has easy RT function as Normal distribution. Gumbel softmax (GS) posterior for z. As categorical variable, z is difficult to build variational posterior under AEVB with accurate RT function. Instead, we employ GS distribution where τ is the temperature parameter. In order to build encoder for π , we let π = f Wπ (θ, w). For efficient gradient propagation, rather than sampling z from arg max as (7), we obtain the variational posterior of soft assignment vector z Besides the benefit of efficient gradient backpropagation, the soft assignment in (8) provides clustering belonging weights. In the following En-sLM, this property is useful for some ambiguous samples that may belong to different clusters. We obtain the ELBO of mATM as Similarly with that can be learned by maximizing the ELBO in (9). Recently, various advanced LMs for language understanding and generation have been introduced, most of which do not consider the data diversities in the corpus. In this paper, having obtained the clustering assignment vector z from mATM, given a single LM as backbone, we propose the ensemble LM (EnsLM) via z-guided weight modulation. In other words, the EnsLM can modulate the backbone single LM to fit for different clusters. Although LMs have many different types, basically, all of them build on convolutional (such as in CNN where, H 1 ∈ R Ix×Iy×C in and H 1 ∈ R C in are the input features, W ∈ R kx×ky×C in ×Cout and W ∈ R C in ×Cout are the convolutional kernel or full-connected weights where For a document w whose feature at current layer is H 1 , after archiving its domain assignment z ∈ R C×1 from (8), we feed H 1 into the modulated layer as where , and denotes matrix element-wise product (with broadcasting for convolution). Explanation of (12). Intuitively, W and W act as the backbone parameters in the original single LM, and Γ is the modulated parameters, which moves the backbone to fit different domains. If z is drawn from (7) that means z is a one-hot vector, then it denotes that α and β are chosen from the dictionaries A and B, correspondingly. If z is drawn from (8) that means z is a soft assignment vector, then it denotes that α and β are weighted summation of all elements in A and B, correspondingly. In practice, we use the soft assignment vector since i) it brings efficient gradient propagation during joint training of mATM and EnsLM, and ii) it considers the fact that there are some domain ambiguous samples in the dataset. It is interesting to note that although EnsLM is developed for the problem that ground-truth priors of data diversity (such as domain label) is unavailable, it can be also used when we know the priors. For this scenario, rather than inferring the clustering assignment z from mATM via (8), we directly set z as the real one-hot assignment vector, which is illustrated in experiment in Sec. 5.2. 
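One way to realize the z-guided weight modulation of a fully-connected layer sketched above is shown below: the soft cluster assignment z mixes per-cluster modulation vectors from dictionaries A and B, which then rescale and shift the backbone layer's output. Since Eq. (12) is not reproduced verbatim in the text, this FiLM-style form is an assumption for illustration rather than the paper's exact formulation.

```python
import torch
import torch.nn as nn

class ModulatedLinear(nn.Module):
    """Backbone Linear layer whose output is modulated by a soft cluster
    assignment z via per-cluster dictionaries A (scales) and B (shifts)."""
    def __init__(self, d_in: int, d_out: int, n_clusters: int):
        super().__init__()
        self.backbone = nn.Linear(d_in, d_out)                  # shared across clusters
        self.A = nn.Parameter(torch.ones(n_clusters, d_out))    # cluster-specific scales (identity at init)
        self.B = nn.Parameter(torch.zeros(n_clusters, d_out))   # cluster-specific shifts (zero at init)

    def forward(self, h: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        # h: (batch, d_in); z: (batch, n_clusters) soft assignment from the mATM encoder
        alpha = z @ self.A                                      # (batch, d_out) mixed scale
        beta = z @ self.B                                       # (batch, d_out) mixed shift
        return alpha * self.backbone(h) + beta

layer = ModulatedLinear(d_in=300, d_out=150, n_clusters=16)
h = torch.randn(8, 300)
z = torch.softmax(torch.randn(8, 16), dim=-1)                   # stands in for the Gumbel-softmax z
print(layer(h, z).shape)                                        # torch.Size([8, 150])
```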
Different from some strategies such as data selection that separate the calculation of assignment and the training of LM, our proposed mATM and En-sLM can be jointly trained in one framework. Specifically, given a training set containing N sample {w n } N n=1 , suppose that there is a label {y n } N n=1 for each sample. It should be noted that labels {y n } N n=1 can be different for different tasks, such as labels for document classification, golden summarization for abstractive summarization, or document itself for generation. As a result, the loss for joint training of mATM and EnsLM can be written as where, without loss of generality, L LM denotes the loss for LM. All learnable parameters are i) parameters of mATM: and ii) parameters of LM: Θ LM . These parameters can be jointly trained by stochastic gradient descend with low-variance gradient estimation since LN and GS distributions have easy RT function. In this section, we evaluate the effectiveness and efficiency of our proposed mATM and EnsLM on different NLP tasks including document clusters, text classification, language generation and abstractive document summarization. Our code is available at The basic idea of mATM and EnsLM is that mATM can automatically discover the sample clusters which describe the data diversity. Therefore, we firstly evaluate the document clustering performance of mATM. Datasets Following 20News has 20 classes and consists of 18,846 documents with a vocabulary size of 61,188, partitioned into a training set of 11,314 documents and a test set of 7,532 ones. R8 is a subset of the Reuters 21578 dataset, which has 8 classes and was split into 5,485 training and 2,189 test documents. For these two datasets, we remove the stop words and use the 2,000 most frequent terms as the vocabulary. For all methods, we set the number of clusters as the number of classes. Comparison models and implementation details To verify the effectiveness of mATM for clustering, three types of document clustering models are compared. i) Raw+kmeans performs K-means on raw BoW vectors, and PCA+kmeans uses PCA extract low-dimensional features and then uses K-means for clustering; ii) Train a topic model and then perform Kmeans for clustering on topic proportions, where we consider LDA+kmeans Sentiment classification (positive or negative) for different products is a fundamental language understanding task in NLP. For this task, the data diversity mainly arises from different domains (products) Datasets To evaluate the performance of mATM and EnsLM in capturing the multi-domain property for sentiment classification, following Comparison models and implementation details Following The results of averaged accuracy on all domains are given in Table Comparing results on the first row, we can see that joint training models on all domains outperform separate training on each domain. Compared with BiLSTM-mix, having obtained the GT domain label, DA-MTL, ASP-MTL and MDAE (all of them are developed based on BiLSTM) consider the real domain knowledge in word embedding, feature extractor and attention layers, achieving higher accuracy. Similarly, with GT domain label, three models equipped with our proposed EnsLM performs better than their basic counterparts with a large margin. Assuming that GT domain labels are unavailable, we use mATM to infer the clustering assignment to guide the learning of EnsLM, which obtains the SOTA performance on all three basic models, even better than the models using GT domain label. 
We attribute it to the fact that com- pared with the hard GT domain label, mATM infers the soft clustering assignment, which not only reflect the domain characteristic of samples but also describe the samples having confused domain characteristics. For example samples from DVD may be similar with the ones from Electronics. Datasets In order to verify the effectiveness of our model on datasets of different lengths, we consider four publicly available corpora: APNEWS, IMDB, BNC, and COCO. Following Lau et al. (2017), we tokenize words and sentences using Stanford CoreNLP Comparison models and implementation details We consider the following baseline models: LSTM, A standard LSTM language model Representive topics Original sentences Generated sentences 1 ['kite', 'flying', 'sky', 'air', 'holding'] ['man A man in a yellow and white outfit flying a kite. A young child flying a kite with a frisbee in the air. A person flying a kite near the water in a body of water. ['cake Two cakes with frosting on top sit on a red plate. A sandwich on a platter with a pickle and some fruit. A cake that has various decorations on it. [ A man on a baseball field swinging a bat. A baseball player swinging a bat on a field. A batter is getting ready to hit the ball. 1024. We set the mini-batch size as 8, the number of training epochs as 5. The clustering number of mATM is set to 64 for the first three datasets, while 80 for COCO dataset. More detailed settings and implementation details can be found in Appendix B.2 Results For fair comparison, we use standard language model perplexity as the evaluation metric. The results of all models on four datasets are given in Table In the first group, Transformer-XL gets better result, which shows that the transformer-based model have better modeling capabilities. In terms of capturing the document global semantic information, the second group can improve performance significantly, which indicates that the topic model is effective in capturing document global information. Pre-training on massive data, the GPT-2 can obtains better results compared with above models. Although GPT-2 gets a good result, the GPT-2-EnsLM-mATM can improve performance significantly by capturing data diversity. It illustrates that even pre-training on large scale of corpus, En-sLM can further improve the performance of pretrained LM via exploring data diversity. A similar phenomenon also appeared in the experiments conducted by In this paper, we first propose mATM to infer latent semantic clusters from raw text corpus, and then combine it with LM with efficient weight modulation, resulting in a more powerful EnsLM, which can be naturally extended to other LMs. In the future, we will study the effectiveness of EnsLM on other NLP tasks, such as the multi domain translation, and investigate whether EnsLM can be applied to the pre-training stage of Transformer. can be fround in the release code of CNN/DM CNN/DM consists of news and associated sentence highlights, that is a brief overview composed of a few sentences. Following the standard training/validation/testing splits in Hermann et al. ( XSum XSum includes 226, 711 news articles, each of which is associated with a one-sentence summary. We use the standard training/validation/testing splits Note that we remove stop words to obtain the bagof-word (BOW) vector for each document, and then use the BOW vectors to infer the mATM model. 
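A minimal way to produce the stop-word-filtered bag-of-words vectors mentioned above as input for mATM is shown below with scikit-learn; the toy documents and the vocabulary cap are placeholders.

```python
from sklearn.feature_extraction.text import CountVectorizer

docs = ["the cake had frosting on top",
        "a batter swings at the ball on the field"]
vectorizer = CountVectorizer(stop_words="english", max_features=2000)
bow = vectorizer.fit_transform(docs)      # sparse (n_docs, vocab_size) count matrix
print(bow.toarray())
print(sorted(vectorizer.vocabulary_))
```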
CNN/BiLSTM-EnSLM-mATM: To reduce both computation and storage costs, we introduce a learnable key vector as W (t) , which can be combined with mATM by efficient weight modulation, leading to a CNN/BiLSTM-EnSLM-mATM. More specifically, we adopt 1-layer CNN/BiLSTMCNN with the channel/hidden size of 150 in CNN/BiLSTM-EnSLM-mATM equipped with 300-dimensional word embedding vecotrs. For optimization, the Adam optimizer is utilized here (Kingma and Ba, 2014) with a learning rate of 0.001. To avoid overfitting, we utilize the dropout and set its rate as 0.5. We set the size of minibatch as 50 in all experiments. Bert-EnsLM-mATM: As a transformer-based model, the main component of Bert is query, key and value layer. And these component as MLP layer, we can combine Bert with mATM by efficient weight modulation easily. Specially, to re- with size 5, and tuned the α for the length penalty between 0.6 and 1 on validation set. It is worth noting that our decoder applies neither a copy nor a coverage mechanism, despite their popularity in abstractive summarization. As shown in Fig. | 1,001 | 810 | 1,001 |
Temporal Scoping of Relational Facts based on Wikipedia Data | extraction from text has focused on named-entity recognition, entity linking, and relation extraction. Less attention has been paid given to extracting the temporal scope for relations between named entities; for example, the relation president-Of(John F. Kennedy, USA) is true only in the time-frame (January 20, 1961 -November 22, 1963). In this paper we present a system for temporal scoping of relational facts, which is trained on distant supervision based on the largest semi-structured resource available: Wikipedia. The system employs language models consisting of patterns automatically bootstrapped from Wikipedia sentences that contain the main entity of a page and slot-fillers extracted from the corresponding infoboxes. This proposed system achieves state-of-the-art results on 6 out of 7 relations on the benchmark Text Analysis Conference 2013 dataset for temporal slot filling (TSF), and outperforms the next best system in the TAC 2013 evaluation by more than 10 points. | Previous work on relation extraction In this paper, we describe TSRF, a system for temporal scoping of relational facts. For every relation type, TSRF uses distant supervision from Wikipedia infobox tuples to learn a language model consisting of patterns of entity types, categories, and word n-grams. Then it uses this trained relation-specific language model to extract the top k sentences that support the given relation between the query entity and the slot filler. In a second stage, TSRF performs timestamp classification by employing models which learn "Start", "End" and "In" predictors of entities in a relationship; it computes the best 4-tuple timestamp [T1, T2, T3, T4] based on the confidence values associated to the top sentences extracted. Following the TAC-TSF task for 2013, TSRF is trained and evaluated for seven relation types, as shown in Table The remainder of the paper is organized as follows: The next section describes related work. Section 3 introduces the TAC-TSF input and output formats. Section 4 discusses the main challenges, and Section 5 details our method for temporal scoping of relations. Section 6 describes our experiments and results, and it is followed by concluding remarks. | To our knowledge, there are only a small number of systems that have tackled the temporal scoping of relations task. YAGO The TempEval task The current state-of-the-art systems for TSF have been the RPI-Blender system by 3 The Temporal Slot Filling Task The input format for a TSF system as instantiated for the relation per:spouse(Brad Pitt, Jennifer Aniston) is shown in Table Similar to the regular slot filling task in TAC, the TSF output includes the offsets for at least one entity mention and up to two temporal mentions used for the extraction and normalization of hypothesized answer. For instance, assume that a system extracts the relative timestamp "Monday" and normalizes it to "2010-10-04" for the relation org:top employee(Twitter, Williams) using the document date from the following document: The system must report the offsets for both "Monday" in the text body and "2010-10-04" in the DATETIME block for the justification. 
The TAC-TSF task uses the following representation for the temporal information extracted: For each relation provided in the input, TSF systems must produce a 4-tuple of dates: [T1, T2, T3, T4], which indicates that the relation is true for a period beginning at some point in time between T1 and T2 and ending at some time between T3 and T4. By convention, a hyphen in one of the positions implies a lack of a constraint. Thus, We discuss here some of the main challenges encountered in building a temporal scoping system. Annotation of data for this task is expensive, as the human annotators must have extensive background knowledge and need to analyze the evidence in text and reliable knowledge resources. As per Sometimes temporal knowledge is not stated explicitly in terms of dates or timestamps. For example, from the text "they got married on Valentine's Day" a system can extract Valentine's Day as the surface form of the start of the per:spouse relation. However, for a temporal scoping system it needs to normalize the temporal string to the date of February 14 and the year to which the document refers to explicitly in text or implicitly, such as the year in which the document was published. A relation can be specified in text by employing numerous syntactic and lexical constructions; e.g. for the per:spouse relation the patterns "got married on [DATE]" and "vowed to spend eternity on [DATE]" have the same meaning. Additionally, entities can appear mentioned in text in various forms, different from the canonical form given as input. For instance, Figure A temporal scoping system also needs to learn the inter-dependence of relations, and how one event affects another. For instance, in our automatically generated training data, we learn that a death event specified by n-grams like "was assassinated" affects the per:title relation, and it indicates that the relationship ended at that point. In Figure A temporal scoping system should also be able to model the trustworthiness of text patterns, and even the evolution of patterns that indicate a relationship over time. For example, in current news, the birth of a child does not imply that a couple is married, although it does carry a strong signal about the marriage relationship. 5 Learning to Attach Temporal Scope As outlined in Section 4, one of the biggest challenges of a temporal scoping system is the lack of annotated data to create a strong information extraction system. Previous work on relation extraction such as • It allows building classifiers with a large number of features; • The supervision is provided intrinsically by the detailed user-contributed knowledge; • There is no need to expand patterns iteratively. Mintz et al. also point out that similar to unsupervised systems, distant supervision also allows: • Using large amounts of unlabeled data such as the Web and social media; • Employing techniques that are not sensitive to the genre of training data. We follow the same premise as Hence, in our first step, we build an automatic system which takes as input a binary relation between two entities e.g. per:spouse(Brad Pitt, Jennifer Aniston) and a number of documents. The system needs to extract highly ranked/relevant sentences, which indicate that the two entities are in the targeted relationship. The next component takes as input the top k sentences generated in the previous step and extracts temporal labels for the input relation. 
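The 4-tuple representation lends itself to a simple data structure. The sketch below is one way to encode it, with None standing in for the '-' no-constraint marker; the field names and the consistency checks are our own illustrative choices rather than part of the task definition.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class TemporalScope:
    """[T1, T2, T3, T4]: the relation starts between T1 and T2 and ends
    between T3 and T4; None stands for the '-' (no constraint) marker."""
    t1: Optional[date] = None
    t2: Optional[date] = None
    t3: Optional[date] = None
    t4: Optional[date] = None

    def is_consistent(self) -> bool:
        # Lower bounds must not exceed the matching upper bounds, and the
        # earliest possible start cannot come after the latest possible end.
        if self.t1 and self.t2 and self.t1 > self.t2:
            return False
        if self.t3 and self.t4 and self.t3 > self.t4:
            return False
        if self.t1 and self.t4 and self.t1 > self.t4:
            return False
        return True

kennedy = TemporalScope(date(1961, 1, 20), date(1961, 1, 20),
                        date(1963, 11, 22), date(1963, 11, 22))
print(kennedy.is_consistent())  # True
```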
Note that our target is to develop algorithms that are not relation-specific but rather can work well for a multitude of relations. We elaborate on these two system components further. Wikipedia is the largest freely available encyclopedic collection, which is built and organized as a user-contributed knowledge base (KB) of entities. The current version of the English Wikipedia contains information about 4.2 million entities. In addition to the plain text about these entities, Wikipedia also contains structured components. One of these is the infobox. Infoboxes contain information about a large number of relations for the target entity of the Wikipedia page, e.g. names of spouses, birth and death dates, residence etc.. Similar to structured databases, the infoboxes contain the most important/useful relations in which entities take part, while the text of Wikipedia pages contains mentions and descriptions of these relations. Because of this, Wikipedia can be seen as a knowledge repository that contains parallel structured and unstructured information about entities, and therefore, can be employed more easily than Freebase or other structured databases for building a relation extraction system. Figure For every relation, we extract slot-filler names from infoboxes of each Wikipedia article. We also leverage Wikipedia's rich interlinking model to automatically retrieve labeled entity mentions in text. Because the format of the text values provided by different users for the infobox attributes can vary greatly, we rely on regular expressions to extract slot-filler names from the infoboxes. For every relation targeted, we build a large set of regular expressions to extract entity names and filter out noise e.g. html tags, redundant text etc.. To extract all occurrences of named-entities in the Wikipedia text, we relabel each Wikipedia article with Wikipedia interlinks by employing the entity linking (EL) system by As stated in Section 4, temporal information in text is specified in various forms. To resolve temporal mentions, we use the Stanford SUTime (Chang and Manning, 2012) temporal tagger. The system exhibits strong performance outperforming state-of-the-art systems like HeidelTime After running the Stanford SUTime, which automatically converts date expressions to their normalized form, we collect sets of contiguous sentences from the page that contain one mention of the targeted entity and one mention of the slotfiller, as extracted by the entity linking system. We then build a large language model by bootstrapping textual patterns supporting the relations, sim-ilar to For assigning sentences a relevance score with respect to a targeted relation, we represent the sentences in an input document (i.e., Wikipedia page) as d dimensional feature vectors, which incorporate statistics about how relevant sentences are to the relation between a query entity q and the slot filler z. For example, for the per:spouse relation, one binary feature is "does the input sentence contain the n-gram "QUERY ENTITY got married"". Note that the various surface forms/mentions of q and z are resolved to their canonical target at this stage. We were able to extract 61,872 tuples of query entity and slot filler relations from Wikipedia for the per:spouse relation. Figure On Our language model consists of n-grams (n ≤ 5) like "SLOT FILLER and QUERY ENTITY were married", "SLOT FILLER filed for divorce from" which provides clues for the marriage relation. 
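A sketch of the binary n-gram features described above, after entity mentions are replaced with placeholders; the placeholder tokens and the trigger patterns listed are illustrative examples, not the system's actual feature set.

```python
def featurize(sentence: str, query: str, filler: str, patterns):
    """Binary features: does the sentence (with mentions replaced by
    placeholders) contain each relation-indicative n-gram?"""
    text = sentence.replace(query, "QUERY_ENTITY").replace(filler, "SLOT_FILLER")
    return [int(p in text) for p in patterns]

patterns = [                         # illustrative triggers for per:spouse
    "QUERY_ENTITY got married",
    "SLOT_FILLER and QUERY_ENTITY were married",
    "filed for divorce from",
]
s = "Holmes and Cruise were married in Bracciano, Italy."
print(featurize(s, "Cruise", "Holmes", patterns))   # [0, 1, 0]
```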
These n-grams are then used as features with an implementation of a gradient boosted decision trees classifier similar to that described by In April 2005, Cruise began dating actress Katie Holmes. On April 27 that year, Cruise and Holmesdubbed "TomKat" by the media -made their first public appearance together in Rome. On October 6, 2005, Cruise and Holmes announced they were expecting a child, and their daughter, Suri, was born in April 2006. On November 18, 2006, Holmes and Cruise were married in Bracciano, Italy, in a Scientology ceremony attended by many Hollywood stars. There has been widespread speculation that the marriage was arranged by the Church of Scientology. On June 29, 2012, it was announced that Holmes had filed for divorce from Cruise after five and a half years of marriage. On July 9, 2012, it was announced that the couple had signed a divorce settlement worked out by their lawyers. Our objective is to rank the sentences in a document based on the premise that entities q and z are in the targeted relation r. We tackle this ranking task by using gradient boosted decision trees (GBDT) to learn temporal scope for entity relations. Previous work such as We employ the stochastic version of GBDT similar to Figure On the unseen test data, we apply our trained model and obtain a score for each new sentence s that contains mentions of entities q and z that are in a targeted relationship by turning s into a feature vector as shown previously. Among all sentences that contain mentions of q and z, we choose the top k with the highest score. The value of k was tuned based on the performance of TSRF on our development set. To predict timestamps for each relation, we build another classifier, DATECL similar to that described in the previous section, by using language models for "Start", "End" and "In" predictors of relationship. The "Start" model predicts T1, T2; "End" predicts T3, T4 and "In" predicts T2, T3. Raw Trigger Features: Similar to previous work by External Event Triggers: Our system also considers the presence of other events as triggers e.g. a "death" event signaled by "SLOT FILLER died" might imply that a relationship ended on that timestamp. Similarly, a "birth" event can imply that an entity started living in a particular location e.g. the per:born-In(Obama, Honolulu) relation from the sentence "President Obama was born in At each step, TSRF extracts the top timestamps for predicting "Start", "End" and "In" based on the confidence values of DATECL. Similar to previous work by Step 1: Step 2: Iterate through the classified timestamps Step 3: For a new T aggregate : For evaluation, we train our system on the infobox tuples and sentences extracted from the Wikipedia dump of May 2013. We set aside a portion of the dump as our development data. We chose to use the top-relevant n-grams based on the performance on the development data as features. We employ then the TAC evaluation data, which is publicly available through LDC. We utilize the evaluation metric developed for TAC TAC sets the constant c to one year, so that predictions that differ from the gold standard by one year get 50% credit. The absence of a constraint in T1 or T3 is treated as a value of -∞ and the absence of a constraint in T2 or T4 is treated as +∞, which lead to zero-value terms in the scoring sum. Therefore, the overall achievable score has a range between 0 and 1. 
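The scoring behaviour described here (50% credit at a one-year difference, zero contribution for missing constraints, an overall range of 0 to 1) is consistent with the formulation sketched below. The exact TAC implementation may differ in details such as date granularity, so treat this as an approximation.

```python
from datetime import date

def tac_temporal_score(pred, gold, c_days=365.25):
    """Average over the four tuple positions of 1 / (1 + |difference| / c),
    with c set to one year.  A missing constraint (None) contributes 0,
    as if it were +/- infinity, so the score lies in [0, 1]."""
    total = 0.0
    for p, g in zip(pred, gold):          # pred, gold: 4-tuples of date or None
        if p is not None and g is not None:
            total += 1.0 / (1.0 + abs((p - g).days) / c_days)
    return total / 4.0

gold = (date(2006, 11, 18), date(2006, 11, 18), date(2012, 6, 29), date(2012, 6, 29))
pred = (date(2006, 11, 18), date(2006, 11, 18), date(2012, 6, 29), None)
print(tac_temporal_score(pred, gold))     # 0.75: three exact terms, one missing
```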
We compare TSRF against four other TSF systems: (i) RPI-Blender We also compare our system with a reasonable baseline similar to Table TSRF achieves approximately 48% of human performance (LDC) and outperforms all other sys- It is also important to note that our system exhibits a balanced performance on the relations on which it was tested. As shown in column StDev in Table The paper described an automatic temporal scoping system that requires no manual labeling effort. The system uses distant supervision from Wikipedia to obtain a large training set of tuples for training. It uses a novel two-step classification to remove the noise introduced by the distant supervision training. The same algorithm was employed for multiple relations and exhibited similarly high accuracy. Experimentally, the system outperforms by a large margin several other systems that address this relatively less explored problem. Future directions of development include extracting joint slot filler names and temporal information, and leveraging the changes observed over time in Wikipedia for a query entity and a slot filler in a target relation. | 989 | 1,218 | 989 |
SIB-200: A Simple, Inclusive, and Big Evaluation Dataset for Topic Classification in 200+ Languages and Dialects | Despite the progress in building multilingual language models, evaluation is often limited to a few languages with available datasets which excludes a large number of low-resource languages. In this paper, we create SIB-200a large-scale open-sourced benchmark dataset for topic classification in 205 languages and dialects to address the lack of evaluation dataset for Natural Language Understanding (NLU). For many of the languages covered in SIB-200, this is the first publicly available evaluation dataset for NLU. The dataset is based on Flores-200 machine translation corpus. We annotated the English portion of the dataset and extended the sentence-level annotation to the remaining 204 languages covered in the corpus. Despite the simplicity of this task, our evaluation in full-supervised setting, cross-lingual transfer setting and prompting of large language model setting show that there is still a large gap between the performance of high-resource and low-resource languages when multilingual evaluation is scaled to numerous world languages. We found that languages unseen during the pretraining of multilingual language models, languages from under-represented families (like Nilotic and Altantic-Congo), and languages from the regions of Africa, Americas, Oceania and South East Asia, often have the lowest performance on our topic classification dataset. We hope our dataset encourage a more inclusive evaluation of multilingual language models on a more diverse set of languages. 1 | In the last few years, developing massively multilingual Pre-trained Language Models (PLMs) to scale to several written languages is an active area of research-e.g. covering 100 languages While there is evidence from previous works that languages not covered during pre-training often lead to lower performance, such analysis is also limited to a small selection of languages with annotated datasets Recently, there is a push to scale evaluation datasets to more than 100 languages, but this requires a very expensive annotation effort in terms of money and time. Often, this scaling is only carried out by a large community effort that spans many years like the Universal Dependency (UD) project The largest benchmark datasets that are available for NLU are UD, Taxi1500 In this paper, we create SIB-200-a large-scale open-sourced benchmark dataset for topic classification to address the lack of evaluation datasets for NLU. The dataset is based on Flores-200 Our evaluation shows that there is still a large gap between the performance of high-resource and low-resource languages when multilingual evaluation is scaled to numerous world languages. Languages unseen during the pre-training of multilingual PLMs, languages from under-represented families (like Nilotic and Altantic-Congo), and languages from the regions of Africa, Americas, Oceania and South East Asia, often have the lowest performance on our dataset. We also find that simply scaling up the number of languages without scaling up the domains in the pre-training is unhelpful (e.g., Glot-500 pre-trained on 500 languages largely under-performs XLM-R pre-trained on 100 languages). It is crucial to mix text from various domains. 
For languages unseen during pre-training, we show the potential of multilingual language adaptive fine-tuning (MAFT) 2 Finally, we extend our evaluation to the zeroshot settings by training individually on English, French, Arabic and Chinese (Simplified) languages using XLM-R | We introduce SIB-200-a Simple Inclusive and Big topic classification dataset for over 200 languages and dialects. We leveraged the multi-way parallel Flores-200 dataset annotate (2,009 rather than 562 instances We recruited four annotators who are native speakers of English to label 2,009 sentences obtained from the DEV and DEVTEST sets of Flores-200 We report Fleiss Kappa score Choosing the final label per sentence We assigned the final label to a sentence by majority voting. Specifically, we assign a label to a sentence if at least two annotators agree on the category, but we excluded the situation, where any two annotators conflicted with the other two annotators. For example, for the sentence "The major organ of the circulatory system is the heart, which pumps the blood.", the first two annotators assigned "science" while the last two assigned "health". In total, we assigned a single label to 1,695 sentences, but there were 314 sentences with conflicts in the annotation. We asked the lead annotator to adjudicate the sentences with conflicting annotations and assigned a single label to each sentence. We later combined the fixed conflicting annotations with the others to give us back a total of 2009 annotated sentences. For the final dataset, we excluded sentences with the label of "uncategorized", we only selected label categories with more than 80 sentences, this removed categories such as "business" (80 sentences), "disasters" (73 sentences), "crime" (72 sentences), "education" (52 sentences), and "religion" (46 sentences). We note that having too many categories with few sentences makes building text classification models a bit difficult leading to a lower performance. Also, we combined "science" (138 sentences) and "technology" (114 sentences) category into a single category of "science/technology". Finally, we removed the "nature" category because there is a lot of conflict with "science" and "geography" categories. Our preliminary experiments show that adding "nature" significantly lowers the performance of our classifier. About half of the Flores-200 is part of the SIB-200 dataset (i.e. 1004 out of 2009 sentences). Table Here, we describe different categorization of languages, text classification models developed for SIB-200 , and the experimental settings (i.e. full supervised setting and zero-shot transfer setting). Table Categorization by availability in PLM Lastly, we grouped languages and language families by their inclusion in the training of multilingual PLMs. XLM-R We trained a simple Multilayer Perceptron (MLP), fine-tuned multilingual PLMs and prompted large language models for text classification. Multi-Layer Perceptron For the input features, we make use of either n-gram features (n=1 up to 3 in our experiments) or XLM-R tokens obtained by first tokenizing the sentences using XLM-R tokenizer. 
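One plausible realisation of this baseline pairs word n-gram counts (n = 1 to 3) with scikit-learn's MLPClassifier, which the next paragraph notes is used with its default settings; the toy training examples below are illustrative only.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Word n-grams (n = 1..3) as input features, default MLP hyperparameters.
clf = make_pipeline(CountVectorizer(ngram_range=(1, 3)), MLPClassifier())

train_texts = ["The major organ of the circulatory system is the heart.",
               "The team won the championship after a dramatic final."]
train_labels = ["health", "sports"]
clf.fit(train_texts, train_labels)
print(clf.predict(["The heart pumps blood through the body."]))
```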
We make use of the default setting on scikit-learn tool MAFT with fewer data and synthetic data We explore how to improve over regional PLMs using MAFT-adaptation of an existing multilingual PLM to multiple or new set of languages simultaneously, this was effective for adapting XLM-R to 20 languages spoken in Africa To extend to more languages, we apply MAFT to 61 African languages with at least 10MB of monolingual data (AfroXLMR-61). To further extend to more languages with less than 10MB of data, we generate machine-translated data using NLLB for 34 African languages (including 18 in AfroXLMR- Table Cross-lingual transfer is based on the XLM-R model as it is the best-performing PLM. Performances from 4 source languages: English, French, Chinese and Arabic are reported. ). We refer to the resulting model after adaptation as AfroXLMR-76. We provide more details on the pre-training corpus in Appendix B. Large Language Models Lastly, we also report results by prompting two popular large language models: GPT-3.5-Turbo (gpt-3.5-turbo-0613) and . Compared with smaller language models from MLM and MAFT, they feature strong instruction-following capabilities without task-specific fine-tuning. Fully-supervised In this setting, we trained on each language in SIB-200 and evaluated on the same language. We did this evaluation for 205 languages and compared the performance of different text classification models. The MLP models were trained for 300 iterations, and we used either word ngram tokens or XLM-R tokens. For the multilingual PLM, we fine-tune each language training data for 20 epochs, with a maximum sequence length of 164, batch size of 16, and learning rate of 1e -5 on a single Nvidia A10 GPU. Here, we assume access to labelled data in the target language. Cross-lingual transfer For this setting, we finetune XLM-R on a language in Joshi's class 5 (we call it a "source" language), and evaluate on other languages. For this setting, we fine-tune XLM-R on a language in Joshi's class 5 (we call it a "source" language), and evaluate on other languages. We trained in four languages with three different scripts i.e. English, French, Arabic and Chinese (Simplified). Here, we assume access to labelled data in a few high-resource languages. Zero-shot prompt We prompt GPT-3.5/4 for text classification for the 205 languages using an English template. We make use of a simple template from Sanh et al. ( In order to demonstrate the effectiveness of our data set for multilingual evaluation, we benchmark the performance across various models and group the results by categorizations (Table Comparing English versus other languages, finetuning XLM-R on English achieved an accuracy of 92.1%, indicating that the task itself is not difficult if given a properly pre-trained MLM and ∼ 700 training samples. However, when fine-tuning the same model in other languages, the performance drops vastly to an average accuracy of 75.9%. Similarly, in the cross-lingual transfer and zero-shot prompt scenarios, the performance further drops. The distribution of accuracy scores is imbalanced across language families. Atlantic-Congo, Nilotic, Mande, Aymaran and Quechuan languages have the lowest accuracy scores. Even under the fully supervised scenario, the best-performed model reaches <65% accuracy scores on these languages. 
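A minimal fine-tuning sketch matching the stated hyperparameters (20 epochs, maximum sequence length 164, batch size 16, learning rate 1e-5), using the Hugging Face Trainer as one plausible implementation; the checkpoint name, dataset objects, and preprocessing are placeholders rather than the exact training script.

```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tok = AutoTokenizer.from_pretrained("xlm-roberta-base")      # placeholder checkpoint
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=7)                        # 7 topic categories

args = TrainingArguments(
    output_dir="sib200-xlmr",
    num_train_epochs=20,
    per_device_train_batch_size=16,
    learning_rate=1e-5,
)

def encode(batch):
    return tok(batch["text"], truncation=True, max_length=164)

# train_ds / eval_ds: tokenized Dataset objects with a "label" column (not shown).
# trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```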
There also tends to be a larger performance gap between fullysupervised and cross-lingual transfer scenarios, suggesting a poor semantic alignment Performances across models In the fully supervised scenario, XLM-R performs the best on 16 out of the 22 language families. Among the remaining 6 language families, applying the simplest MLP classifier with n-gram input features outperforms more complex transformer-based MLMs (Glot-500 and XLM-R), suggesting they are not well adapted to these 6 language families. Glot-500, despite being pre-trained with many more languages, outperforms XLM-R only on Sino-Tibetan languages. Even on Sino-Tibetan languages, it fails to outperform the simplest MLP baseline. Cross-lingual transfer results are similar when using different source languages. On most language families, the results are comparable to fully supervised ones. Zero-shot prompting leads to a big drop due to the lack of supervised samples. The performance is good only for a few language families such as Indo-European, Uralic, Japonic and Koreanic. In order to determine the critical factor in this multilingual classification task, we conducted in-depth case studies on the model architecture choices and language categorizations. Effect of language coverage in pre-training Figure Effect of pre-training corpora size Figure Fine-tune vs. Prompted Out of all the 205 languages, GPT-4 outperforms GPT-3.5-turbo in 157 languages. Only on Buginese, Kabiyè, Mizo, Nuer and Ayacucho Quechua, GPT-3.5-Turbo outperforms GPT-4 for > 10%. However, zero-shot prompting consistently underperforms fine-tuned methods. It is hard to include extensive descriptions of the classification criteria in the prompt. Adding more examples to the prompt might improve the performance. Cross-Lingual transfer vs Fully supervised Here, we compare the performance between crosslingual transfer and fully-supervised methods. We observe that all languages that are included in the pre-training corpus of XLM-R, the cross-lingual transfer performs similarly to fully supervised methods. The best source language for crosslingual transfer is, surprisingly, French, rather than English, which has the largest amount of pretraining corpus, though the difference among various source languages is tiny. This suggests languages included in the XLM-R pre-training corpus are pretty well aligned with all the four chosen high-resource languages. The advantage of fully supervised methods over cross-lingual transfer becomes prominent mainly when the target language is not included in the pre-training corpus of XLM- 11 We define "preferred written scripts" as the writing system or script that individuals or communities predominantly choose or favor when expressing written language. R and its script is included. In this case, fully supervised methods can improve the performance by fine-tuning the model on the target languages, but cross-lingual transfer fails to capture the alignment with high-resource languages. Figure Evaluation of region-specific PLMs While our evaluation is primarily focused on multilingual PLMs trained on 100 languages or more, models pre-trained on a group of linguistically or geographically related languages often lead to better performance as observed for Indian languages (Table In this paper, we created SIB-200-a large scale open-sourced benchmark dataset for topic classification in 200 languages and dialects to address the lack of evaluation datasets for natural language understanding especially for low-resource languages. 
We performed extensive evaluation across fullsupervised setting, cross-lingual transfer setting and prompting of LLMs settings. Furthermore, we grouped the 200 languages in different categories based on language families, geographical regions, Joshi's class and coverage in multilingual pre-trained language models to provide insights into which group of languages have poor performance on this simple and inclusive benchmark. Data size One of the limitations of our work is the size of the benchmark data which is 1,004. Having more instances would be better. However, we believe this is an important contribution for many languages that often do not have dataset (e.g. news articles or Wikipedia articles) that can be used for topic classification annotation. Translationese effect One of the main limitation of our work is that the labelled dataset created for other non-English languages are based on human translation and may suffer from translationese effect including a slight drop in performance. Few PLMs evaluated Another limitation is the choice of multilingual pre-trained language models, we note that XLM-R may not be the best multilingual encoder out there, there are other publicly available ones like InfoXLM Categorization by availability in PLM Lastly, we grouped languages and language families by their inclusion in the training of multilingual PLMs. XLM-R Table We explore how to improve over regional PLMs using MAFT-adaptation of an existing multilingual PLM to multiple or new set of languages simultaneously, this was effective for adapting XLM-R to 20 languages spoken in Africa To further extend to more languages with less than 10MB of data, we generate machinetranslated data using NLLB for 34 African languages (including 18 in AfroXLMR-61). The selected 34 languages are the ones with less than 10MB or only have MT560 (religious domain). We make use of the English news commentary dataset 14 14 we used version 16 of the data released for WMT. By fine-tuning XLM-R SIB-200 with 14 labels, we achieved accuracy score of 82.3% while for the 7 labels, we reached the performance of 92.5%. Table Figure Vocabulary augmentation to address the noncoverage of some African scripts like Nkoo and Tfng, we perform vocabulary augumentation of the original XLM-R tokenizer. We follow these steps: (1) we train a tokenizer on a combined multilingual texts for N'ko, Tamasheq (Tifinagh) and Tamazight languages using sentencepiece, and vocabulary size of 30K. ( (3) We performed MAFT on XLMR. The resulting model is called AfroXLMR-76-script. As an additional experiment, we repeated the vocabulary augmentation and MAFT approach for XLM-R-base model, resulting into AfroXLMR-base-76-script. Results categorized by script Our results in Table Overall African languages evaluation Table | 1,499 | 1,975 | 1,499 |
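A rough sketch of the vocabulary-augmentation steps described above, assuming sentencepiece for step (1) and the Hugging Face add_tokens / resize_token_embeddings utilities for step (2); the corpus file name and the base checkpoint are placeholders.

```python
import sentencepiece as spm
from transformers import AutoModelForMaskedLM, AutoTokenizer

# (1) Train a 30K-piece sentencepiece model on the combined N'ko / Tifinagh /
#     Tamazight monolingual text (file name is a placeholder).
spm.SentencePieceTrainer.train(input="nko_tfng_text.txt",
                               model_prefix="augment", vocab_size=30_000)
sp = spm.SentencePieceProcessor()
sp.Load("augment.model")
new_pieces = [sp.IdToPiece(i) for i in range(sp.GetPieceSize())]

# (2) Add the unseen pieces to the XLM-R tokenizer and grow the embedding matrix.
tok = AutoTokenizer.from_pretrained("xlm-roberta-base")      # placeholder checkpoint
model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-base")
tok.add_tokens([p for p in new_pieces if p not in tok.get_vocab()])
model.resize_token_embeddings(len(tok))
# (3) Continue pre-training (MAFT) on the multilingual corpus with this model.
```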
Selecting Backtranslated Data from Multiple Sources for Improved Neural Machine Translation | Machine translation (MT) has benefited from using synthetic training data originating from translating monolingual corpora, a technique known as backtranslation. Combining backtranslated data from different sources has led to better results than when using such data in isolation. In this work we analyse the impact that data translated with rule-based, phrasebased statistical and neural MT systems has on new MT systems. We use a real-world low-resource use-case (Basque-to-Spanish in the clinical domain) as well as a high-resource language pair (German-to-English) to test different scenarios with backtranslation and employ data selection to optimise the synthetic corpora. We exploit different data selection strategies in order to reduce the amount of data used, while at the same time maintaining highquality MT systems. We further tune the data selection method by taking into account the quality of the MT systems used for backtranslation and lexical diversity of the resulting corpora. Our experiments show that incorporating backtranslated data from different sources can be beneficial, and that availing of data selection can yield improved performance. | The use of supplementary backtranslated text has led to improved results in several tasks such as automatic post-editing (Junczys-Dowmunt and While In this work we conduct a systematic study of the effects of backtranslated data from different sources, as well as how to optimally select subsets of this data taking into account the loss in quality and lexical richness when data is translated with different MT systems. That is, we aim to (i) provide a systematic analysis of backtranslated data from different sources; and (ii) to exploit a reduction in the amount of training data while maintaining high translation quality. To achieve these objectives we analyse backtranslated data from several MT systems and investigate multiple approaches to data selection for backtranslated data based on the Feature Decay Algorithms (FDA: Nowadays, We conduct our analysis in the scope of the EU-ES translation of EHR use-case, as well as on a language pair and a data set that have been well studied in the literature -German to English (DE-EN) data used in the WMT Biomedical Translation Shared Task | One of the first papers comparing the performance of different systems for backtranslation was More recently In this work we extend these ideas by combining backtranslated data from RBMT, PB-SMT, NMT (LSTM) and While the most common approach to assessing the translation capabilities of a MT system is via evaluation scores such as BLEU FDA where length(s) is the number of words in the sentence s and C L (ngr) is the number of occurrences of the n-gram ngr in L. The score is then used to rank sentences, with the one with the highest score being selected and added to L. This process is repeated iteratively. To avoid selecting sentences containing the same n-grams, score(s, S seed , L) applies a penalty to the n-grams (up to order three in the default configuration) proportional to the occurrences that have been already selected. In (1), the term 0.5 C L (ngr) is used as the penalty. 
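A simplified, unoptimised sketch of the greedy FDA selection loop with the 0.5^{C_L(ngr)} penalty described above; the released FDA implementation uses additional refinements (e.g., efficient priority queues and incremental updates), so this is only illustrative.

```python
from collections import Counter

def ngrams(tokens, n_max=3):
    return [" ".join(tokens[i:i + n]) for n in range(1, n_max + 1)
            for i in range(len(tokens) - n + 1)]

def fda_score(sentence, seed_ngrams, selected_counts):
    """Sum over seed n-grams present in the sentence of 0.5 ** (times the
    n-gram already occurs in the selected pool L), normalised by length."""
    toks = sentence.split()
    score = sum(0.5 ** selected_counts[g] for g in ngrams(toks) if g in seed_ngrams)
    return score / max(len(toks), 1)

def fda_select(candidates, seed_ngrams, k):
    selected, counts, pool = [], Counter(), list(candidates)
    for _ in range(min(k, len(pool))):
        best = max(pool, key=lambda s: fda_score(s, seed_ngrams, counts))
        pool.remove(best)
        selected.append(best)
        counts.update(ngrams(best.split()))
    return selected
```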
In the context of MT, FDA has been shown to obtain better results than other methods for data selection Related work on quality and lexical diversity and richness of MT demonstrates that (i) regardless of the overall performance of an MT system (as measured by both automatic and human evaluation), in general machine-translated text is error-prone and cannot reach human quality In its operation, FDA compares two types of text -the seed and the candidate sentences -without taking into account the quality or the lexical diversity/richness of the candidate text. Our hypothesis is that when selecting data from different sources, FDA cannot account for the differences in quality and lexical diversity/richness of these texts, with the consequence that the selected set (L) is sub-optimal. We test our hypothesis by assessing the quality and lexical diversity/richness of the backtranslated data with the four different systems as well as with different selected subsets of training data. To tackle the problem of sub-optimal FDAselected datasets, we propose to rescore FDA scores based on quality evaluation and lexical diversity/richness scores. 2 That is, for each sentence 2 We talk about "rescoring" as if we compare equations ( where φ is a function over quality and lexical diversity metrics producing a non-negative real number. We note three considerations with respect to our approach to Equation (2). 1. Sentence-level selection versus documentlevel quality and lexical diversity/richness evaluation. The FDA algorithm works on a sentence level, while our approach rescores the FDA scores using document-level metrics. As our goal is to differentiate between the output of different MT systems, we consider metrics that reflect the overall quality of each system. Furthermore, metrics for lexical diversity/richness as type/token ratio (TTR) We decided on this rescoring formula based on preliminary experiments, as it led to the selection of more sentence pairs originating from models trained with backtranslated data from the system that performs best (for both ES-EU and EN-DE); we chose MTLD based on the findings of FDA for MT, we use a devset as the seed. In our method we compute BLEU and TER on the devset also used as a seed; MTLD is computed on the backtranslated text, i.e. the synthetic source text. As a challenging low-resource scenario, we chose the translation of clinical texts from Basque to Spanish, for which there is no in-domain bilingual corpora. We make use of available EHRs in Spanish coming from the hospital of Galdakao-Usansolo to create a synthetic parallel corpus via backtranslation. The Galdakao-Usansolo EHR corpus consists of 142,154 documents compiled between 2008 and 2012. After deduplication, we end up with a total of 2, 023, 811 sentences. In order to adapt the systems to the clinical domain, we used a bilingual dictionary previously used for automatic clinical term generation in Basque (Perez-de-Viñaspre, 2017), consisting of 151,111 terms in Basque corresponding to 83,360 unique terms in Spanish. To evaluate our EU-ES systems, we use EHR templates in Basque written with academic purposes (Joanes Etxeberri Saria V. Edizioa, 2014) together with their manual translations into Spanish produced by a bilingual doctor. These 42 templates correspond to diverse specializations, and were written by doctors of the Donostia Hospital. After deduplication, we obtain 1,648 sentence pairs that are randomly divided into 824 sentence pairs for validation (devset) and 824 for testing. 
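As a concrete picture of the rescoring in Equation (2) discussed above, one multiplicative combination of normalised BLEU, TER and MTLD is sketched below. The actual φ was chosen from preliminary experiments and is not reproduced here, so this particular function is purely illustrative.

```python
def rescoring_factor(bleu, ter, mtld, mtld_max=200.0):
    """Illustrative phi: rewards high BLEU, low TER and high lexical diversity
    of the system that produced the backtranslation.  NOT the paper's exact,
    empirically tuned formula."""
    return (bleu / 100.0) * (1.0 - min(ter, 1.0)) * (mtld / mtld_max)

def rescored_fda(fda_scores, system_metrics):
    """fda_scores: {(sentence, system): fda_score};
    system_metrics: {system: (bleu, ter, mtld)} for each backtranslation system."""
    return {key: score * rescoring_factor(*system_metrics[key[1]])
            for key, score in fda_scores.items()}
```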
In order to test the generalisability of our idea, we use a well-researched language pair, German-to-English. As our out-of-domain corpus, we used the DE-EN parallel data provided in the WMT 2015 The adaptation of systems to the medical domain with backtranslated data is performed using the UFAL data collection. Via a set of experiments, we (i) investigate the differences in the backtranslated data originating from the four different MT systems and their impact on the performance of MT systems using this backtranslated data, and (ii) test our hypothesis as well as different approaches to rescoring the data selection algorithm. First, we train PB-SMT, LSTM and Transformer models for the ES-EU and EN-DE (i.e. reverse) language directions. Then we backtranslate the monolingual corpus into the target language (EU and DE, respectively) using those systems, as well as a RBMT one. RBMT: We use Apertium We train all NMT systems using Open-NMT We use Moses scripts to tokenise and truecase all the corpora to be used for statistical or neural systems. For the NMT systems, we apply BPE For each language pair we train four Transformer models with the authentic and backtranslated data, as well as a fifth system with all four backtranslated versions concatenated to the authentic data. These we refer to as +S bt , where S is one of RBMT, PB-SMT, LSTM or Transformer and indicates the origin of the backtranslation, and +All bt to refer to the system trained with all backtranslated data. Next, we use the devset as a seed for the data selection algorithm. Given that FDA does not score sentences that have no n-gram overlaps with any sentence from the seed, for the 'EachFromAll' configuration presented later, which is constrained to select one sentence for each sentence in the monolingual corpus, we randomly select one sentence among those produced by the 4 different systems used for backtranslation, in case none of them overlap with any sentence from the seed. We obtain the FDA scores and use them to order the sentence pairs in descending order. Next, we apply the following different data selection configurations: 1. Top from all sentences (referred to as FromAll henceforth): concatenate the data backtranslated with all the systems and select the top ranking 2M (for EU-ES) or 2.3M (for DE-EN) sentence pairs with the possibility of selecting the same target sentence more than once, i.e. translated by different systems. 2. Top for each (target) sentence (henceforth, Each-FromAll): concatenate the data backtranslated with all the systems and select the optimal sentence pairs avoiding the selection of the same target sentence more than once. That is, each selected target sentence will have only one associated source sentence originating from one specific system. 3. Top for each (target) sentence x4 (henceforth, EachFromAll x4): same as EachFromAll, but repeating the selected backtranslated data four times (only for EU-ES). 4. Top for each (target) sentence rescored (henceforth, EachFromAll RS): use MT evaluation and lexical diversity metrics to rescore the FDA ranks and perform an EachFromAll selection. We selected the Transformer architecture as the basis of our backtranslation models because (i) it has obtained the best performance for many usecases and language pairs which we also aim at, and (ii) it has been shown that Transformer's performance is strongly impacted by the quantity of data, which can act as an indicator as to whether our improvements originate from the quantity or the quality of the data. 
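The EachFromAll configuration described above reduces to keeping, for every distinct target sentence, the single highest-scoring backtranslated source among the four systems; a minimal sketch with an assumed data layout:

```python
def each_from_all(scored_pairs):
    """scored_pairs: iterable of (target_sentence, source_variant, score).
    Keep only the best-scoring source variant for each target sentence."""
    best = {}
    for tgt, src, score in scored_pairs:
        if tgt not in best or score > best[tgt][1]:
            best[tgt] = (src, score)
    return [(src, tgt) for tgt, (src, _) in best.items()]
```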
That is why we compare EachFromAll systems to systems trained with all backtranslated data (i.e. all 8M sentence pairs), to verify that it is not only the amount of data that impacts performance. We use the automatic evaluation metrics BLEU, TER, METEOR and chrF (in its chrF3 variant) to assess the translation quality of our systems. In Table For EU-ES, all systems trained with 2M sentence pairs selected from the backtranslated data according to the basic DS methods and the newly proposed method with rescoring obtain better results than any system trained with backtranslated data originating from a single system. Furthermore, according to all metrics except BLEU, the Each-FromAll system outperforms FromAll. Compared to the system including the data translated by all systems (+All bt ), EachFromAll is better only in terms of TER. These results show that either the quantity of data leads to differences in performance (comparing the best system after data selection, i.e. EachFromAll, to +All bt ), or that the data selection method fails to retrieve those sentence pairs that would lead to better performance. In order to test these two assumptions, we first train a system with the EachFromAll data repeated 4 times resulting in the same number of sentence pairs as in the +All bt case. According to the resulting evaluation scores, this system is worse than +All bt , but also worse than any of the basic data selection configurations. This indicates that the diversity (among the source sentences) gained by using 4 different systems for backtranslation is more important than the quantity of the data in terms of automatic scores. While for EU-ES the EachFromAll selection configuration achieves the best results, for DE-EN the FromAll configuration leads to better scores. Furthermore, this configuration outperforms the system with all backtranslated data (+All bt ). Next, we train a system with data selected from the backtranslated data after the original FDA scores have been rescored using the quality and lexical diversity/richness scores. These systems are shown in Table We analyse the lexical diversity/richness of the corpora of both language pairs based on the Yule's I, MTLD and TTR metrics. We calculate these scores for the corpora resulting from backtranslation by the different systems (BT), for the corpora resulting from applying the basic data selection approaches (DS), and the development and test sets used for evaluation (EV). We show these scores in Table Regarding the different systems used for backtranslation, we observe that for EU-ES the sentences translated by the RBMT system are much more diverse than the rest according to all metrics, while Transformer obtains the highest scores among the other three. For the DE-EN corpora, this is not the case, and the data from the Transformer system is more diverse according to Yule's I and TTR, but not according to MTLD. We note that Yule's I and TTR depend on the amount of sentences in the assessed corpora. As such, we can see that for the development and test sets the scores are quite a bit higher than the rest. Accordingly, comparisons should be only be conducted for corpora with the same number of sentences. Following the analysis and discussion in We first analyse how the basic data selection methods choose different numbers of sentences from each system used for backtranslation, and then we compare them with the rescoring method. 
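Of the three lexical diversity measures, TTR is the simplest to state; a minimal sketch, which also records the corpus-size caveat noted above:

```python
def type_token_ratio(tokens):
    """TTR = number of distinct word types / number of tokens.
    TTR (and Yule's I) are sensitive to corpus size, so they should only be
    compared across corpora with the same number of sentences."""
    return len(set(tokens)) / len(tokens) if tokens else 0.0

print(type_token_ratio("the cat sat on the mat".split()))   # 5 types / 6 tokens
```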
Figures For EU-ES, we observe that the EachFromAll configuration (the one with the highest scores according to the evaluation metrics in Table For a more in-depth view of the distribution of selected sentence pairs per backtranslation system, we present the amount of selected sentences per system in bins of 100,000 for the FromAll systems. We show the results for EU-ES in Figure We also analyse how the average sentence length varies during the data selection process in the Fro-mAll configuration, as we did in Section 6.3 when analysing the selected systems. Table We evaluated several approaches to data selection over the data backtranslated by RBMT, PB-SMT, LSTM and Transformer systems for two language pairs (EU-ES and DE-EN) from the clinical/biomedical domain. The former is a lowresource language pair, and the latter a well researched, high-resource language pair. Furthermore, in terms of the two target languages, English is a morphologically less rich language than Spanish, which creates a different setting again in which to evaluate our methodology. We use these two different use-cases to better understand both data selection and backtranslation. We show how the different FDA data selection configurations tend to select different numbers of sentences coming different systems, resulting in MT systems with different performance. Under the assumption that FDA's performance is hindered by the fact that the data originates from MT systems, and as such contains errors and is of lower lexical richness, we rescored the data selection scores for each sentence by a factor depending on the BLEU, TER and MTLD values of the system used to backtranslate it. By doing so, we managed to improve the results for the DE-EN system, while for EU-ES we obtained similar performance to the other MT systems; this allows us to use just 25% of the data. Further investigation is required to study under which conditions our proposed rescoring method is beneficial, but our experiments with both low-and high-resource language pairs suggest that if the systems used for backtranslation are poor, then this technique will be of little value; clearly this is closely related to the amount of resources available for the language pair under study. In the future, we plan to investigate ways to directly incorporate the rescoring metrics into the data selection process itself, so that penalising similar sentences can also be taken into account. We also aim to conduct a human evaluation of the translated sentences in order to obtain a better understanding of the effects of data selection and backtranslation on the overall quality. Finally, we intend to analyse the effect of these measures in a wider range of language pairs and settings, in order to propose a more general solution. | 1,166 | 1,095 | 1,166 |
HAT: Hardware-Aware Transformers for Efficient Natural Language Processing | Transformers are ubiquitous in Natural Language Processing (NLP) tasks, but they are difficult to be deployed on hardware due to the intensive computation. To enable low-latency inference on resource-constrained hardware platforms, we propose to design Hardware-Aware Transformers (HAT) with neural architecture search. We first construct a large design space with arbitrary encoder-decoder attention and heterogeneous layers. Then we train a Super-Transformer that covers all candidates in the design space, and efficiently produces many SubTransformers with weight sharing. Finally, we perform an evolutionary search with a hardware latency constraint to find a specialized SubTransformer dedicated to run fast on the target hardware. Extensive experiments on four machine translation tasks demonstrate that HAT can discover efficient models for different hardware (CPU, GPU, IoT device). When running WMT'14 translation task on Raspberry Pi-4, HAT can achieve 3× speedup, 3.7× smaller size over baseline Transformer; 2.7× speedup, 3.6× smaller size over Evolved Transformer with 12,041× less search cost and no performance loss. HAT is open-sourced. | Transformer Nevertheless, it is challenging to deploy Transformers on mobile devices due to the high computation cost. For instance, in order to translate a sentence with only 30 words, a Transformer-Big model needs to execute 13G FLOPs and takes 20 seconds on a Raspberry Pi. Such long latency will hurt the user experience on edge devices. Thus we Figure need hardware-efficient Transformers (Figure Inspired by the success of Neural Architecture Search (NAS) We first construct a large search space with arbitrary encoder-decoder attention and heterogeneous Transformer layers. Traditional Transformer has an information bottleneck between the encoder and decoder. Arbitrary encoder-decoder attention breaks the bottleneck, allowing all decoder layers to attend to multiple and different encoder layers instead of only the last one. Thus low-level information from the encoder can also be used by the decoder. Motivated by Figure To perform a low-cost search in such a large design space, we first train a Transformer supernet -SuperTransformer, which contains many Sub-Transformers sharing the weights. We train all SubTransformers simultaneously by optimizing the uniformly sampled SubTransformers from the Su-perTransformer. The performance of a SubTransformer with inherited weights from the SuperTransformer can provide a good relative performance approximation for different architectures trained from-scratch. Unlike conventional NAS, we only need to pay the SuperTransformer training cost for once and can evaluate all the models in the design space with it. Finally, we conduct an evolutionary search to find the best SubTransformer under the hardware latency constraint. Experiments show that HAT can be naturally incorporated with model compression techniques such as quantization and knowledge distillation. We evaluate HAT with WMT'14 En-De, WMT'14 En-Fr, WMT'19 En-De, and IWSLT'14 De-En tasks on Raspberry Pi ARM CPU, Intel Xeon CPU, and Nvidia TITAN Xp GPU. Compared with previous work | An overview of the HAT framework is shown in Figure We construct a large design space by breaking two conventions in the Transformer design: (1) All decoder layers only attend to the last encoder layer; (2) All the layers are identical. Arbitrary Encoder-Decoder Attention. 
Different encoder layers extract features on different abstraction levels. Conventionally, all the decoder lay-ers only attend to the last encoder layer. It forms an information bottleneck that forces all the decoder layers to learn solely from the high abstraction level and ignore the low-level information. To break the bottleneck, we propose Arbitrary Encoder-Decoder Attention to learn the most suitable connections between the encoder and the decoder. Each decoder layer can choose multiple encoder layers to attend. The key and value vectors from encoder layers are concatenated in the sentence length dimension (Figure Transformers repeat one architecture for all layers. In HAT, instead, different layers are heterogeneous, with different numbers of heads, hidden dim, and embedding dim. In attention layers, different heads are used to capture various dependencies. However, In the FFN layer, the input features are cast to a higher dimension (hidden dim), followed by an activation layer. Traditionally, the hidden dim is set as 2× or 4× of the embedding dim, but this is sub-optimal since different layers need different capacities depending on the feature extraction difficulty. We hence make the hidden dim elastic. Moreover, we also support elastic embedding dim of encoder and decoder, but it is consistent inside encoder/decoder. The number of encoder & decoder layers are also elastic to learn the proper level of feature encoding and decoding. Other design choices such as the length of Q, K, V vectors in attention modules can be naturally incorporated in our framework, which we leave for future work. It is critical to have a large design space in order to find high-performance models. However, training all the models and comparing their BLEU scores is infeasible. We thus propose SuperTransformer, a supernet for performance approximation, which can judge the performance of a model without fully training it. The SuperTransformer is the largest model in the search space with weight sharing Given a latency requirement, we perform an evolutionary search to find a satisfactory SubTransformer. There are two ways to evaluate the hardware latency of a SubTransformer: (1) Online measurement in which we measure the models during the search process. (2) Offline, where we train a latency predictor to provide the latency. We apply the offline method here because it is fast and accurate. For the online method, a single sampled SubTransformer requires hundreds of inferences to get an accurate latency, which lasts for minutes and slows down the searching. For the offline method, we encode the architecture of a SubTransformer into a feature vector, and predict its latency instantly with a multi-layer perceptron (MLP). Trained with thousands of real latency data points, the predictor yields high accuracy (Figure We use an evolutionary algorithm to conduct the search process. As in Figure We conduct experiments on four machine translation tasks: WMT'14 En-De, WMT'14 En-Fr, WMT'19 En-De, and IWSLT'14 De-En, consisting of 4.5M, 36.3M, 43.0M, and 160K pairs of training sentences, respectively. For WMT'14 En-De, we apply 32K source-target BPE vocabulary, train on WMT'16, validate on newstest2013 and test on newstest2014, replicating Baselines. Our baseline models are Transformer Evaluation Metrics. For evaluation, we use beam four and length penalty 0.6 for WMT, and beam five for IWSLT We test the latency of the models by measuring translation from a source sentence to a target sentence with the same length. 
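The offline latency predictor mentioned above can be a small MLP regressor over normalised architecture features; the sketch below uses the three-layer, 400-hidden-unit, ReLU design and the 10-element feature vector reported in the experimental setup, and would be trained (not shown) with a regression loss on (architecture, measured latency) pairs.

```python
import torch
import torch.nn as nn

# Feature vector: 10 normalised architecture descriptors (encoder/decoder layer
# numbers, embedding dims, average hidden dims, average attention heads, ...).
latency_predictor = nn.Sequential(
    nn.Linear(10, 400), nn.ReLU(),
    nn.Linear(400, 400), nn.ReLU(),
    nn.Linear(400, 1),
)

features = torch.rand(1, 10)                       # one encoded SubTransformer
predicted_latency = latency_predictor(features)    # normalised latency estimate
```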
The length is the average output length on the test set -30 for WMT and 23 for IWSLT. For each model, we measure the latency for 300 times, remove the fastest and slowest 10% and then take the average of the rest 80%. We conduct experiments on three representative hardware platforms: Raspberry Pi-4 with an ARM Cortex-A72 CPU, Intel Xeon E5-2640 CPU, and Nvidia TITAN Xp GPU. SuperTransformer Setups. The SuperTransformer for WMT has the following design space: [512, 640] for embedding dim, Hardware-Aware Evolutionary Search Setups. The input of the latency predictor is a feature vector of SubTransformer architecture with ten elements: layer number, embed dim, average hidden dim, average self-attention heads, of both encoder and decoder; plus average encoder-decoder attention heads, and the average number of encoder layers each decoder layer attends. A dataset of 2000 (SubTransformer architecture, measured latency) samples for each hardware is collected, and split into train:valid:test=8:1:1. We normalize the features and latency, and train a three-layer MLP with 400 hidden dim and ReLU activation. We choose three-layer because it is more accurate than the one-layer model, and over three layers do not improve accuracy anymore. With the predictor, we conduct an evolutionary search for 30 iterations in the SuperTransformer, with population 125, parents population 25, mutation population 50 with 0.3 probability and crossover population 50. Training Settings. Our training settings are in line with In Figure We further compare various aspects of HAT with Transformer In Table Design Insights. For all HAT WMT models in Figure In Appendix Figure Ablation Study. HAT achieves higher BLEU with 1.5× lower latency and 1.5× smaller size compared with the largest SubTransformer (Table Neural Architecture Search. In the computer vision community, there has been an increasing interest in automating efficient model design with Neural Architecture Search (NAS) Since different hardware has distinct architecture and features We propose Hardware-Aware Transformers (HAT) framework to solve the challenge of efficient deployments of Transformer models on various hardware platforms. We conduct hardware-aware neural architecture search in an ample design space with an efficient weight-shared SuperTransformer, consuming four orders of magnitude less cost than the prior Evolved Transformer, and discover highperformance low-latency models. We hope HAT can open up an avenue towards efficient Transformer deployments for real-world applications. The larger the inherited val loss, the lower the trained from-scratch BLEU. The larger the inherited val loss, the lower the trained from-scratch BLEU. SubTransformer trained from-scratch BLEU (Target) A Appendix for "HAT: Hardware-Aware Transformers for Efficient Natural Language Processing" A.1 SubTransformer Performance Proxy In Figure A.2 Visualizations of Searched Models on WMT'14 En-De Task We show the HAT models searched for Raspberry Pi ARM Cortex-A72 CPU and Nvidia TITAN Xp GPU in Figure | 1,152 | 2,004 | 1,152 |
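A sketch of the latency measurement protocol just described (300 runs, discard the fastest and slowest 10%, average the remaining 80%); run_inference is a placeholder for one translation call on the target hardware.

```python
import time
import statistics

def measure_latency(run_inference, n_runs=300, trim=0.1):
    """Run inference n_runs times, drop the fastest and slowest 10% of
    measurements, and average the remaining 80%."""
    times = []
    for _ in range(n_runs):
        start = time.perf_counter()
        run_inference()
        times.append(time.perf_counter() - start)
    times.sort()
    k = int(len(times) * trim)
    return statistics.mean(times[k:len(times) - k])
```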
Math Word Problem Solving by Generating Linguistic Variants of Problem Statements | The art of mathematical reasoning stands as a fundamental pillar of intellectual progress and is a central catalyst in cultivating human ingenuity. Researchers have recently published a plethora of works centered around the task of solving Math Word Problems (MWP) -a crucial stride towards general AI. These existing models are susceptible to dependency on shallow heuristics and spurious correlations to derive the solution expressions. In order to ameliorate this issue, in this paper, we propose a framework for MWP solvers based on the generation of linguistic variants of the problem text. The approach involves solving each of the variant problems and electing the predicted expression with the majority of the votes. We use DeBERTa (Decoding-enhanced BERT with disentangled attention) as the encoder to leverage its rich textual representations and enhanced mask decoder to construct the solution expressions. Furthermore, we introduce a challenging dataset, PARAMAWPS, consisting of paraphrased, adversarial, and inverse variants of selectively sampled MWPs from the benchmark MAWPS dataset. We extensively experiment on this dataset along with other benchmark datasets using some baseline MWP solver models. We show that training on linguistic variants of problem statements and voting on candidate predictions improve the mathematical reasoning and robustness of the model. We make our code and data publicly available. | Math word problem solving is a long-standing research problem in Artificial General Intelligence (AGI) and a lot of studies about this topic, from both industry and academia, have been published recently. A typical Math Word Problem (MWP) takes the form of a written narrative that articulates a problem scenario and poses a question regarding one or more unknown quantities. A language model capable of solving such problems has Problem: 69 handbags are sold for $13 each. There are a total of 420 handbags in a boutique and the remaining handbags are sold for $7 each. How much did the boutique earn after selling all the handbags? Expression: x = 69 × 13 + (420 -69) × 7 Solution: 3354 Table to translate the human-readable problem statement to a valid mathematical expression that can be evaluated to obtain the numeric answer. An example of a classic MWP is portrayed in Table A lot of challenges manifest while designing an automated system for solving these problems To address the problem outlined in Table • We propose a framework for solving simple math word problems by generating paraphrased linguistic variants of the input problem statement using OpenAI's latest Generative Pre-trained Transformer (GPT-3) • We also generate a large, augmented version of the MAWPS DeBERTa (Decoding-enhanced BERT with disentangled attention) on the SVAMP dataset 3 Literature Review | The dawn of research on MWP solving was in the Currently, the landscape of Deep learning models for the MWP solving task is primarily comprised of five distinct paradigms, SEQ2SEQbased, SEQ2TREE-based, GRAPH2TREE-based, complex relation extraction-based, and Large Language Model (LLM) prompt-based approaches, each of which has demonstrated remarkable levels of performance and efficacy. 
With the advent of LLMs, many innovative prompt-based methods Paraphrase generation has garnered significant attention from various NLP approaches, encompassing rule-based methods Accordingly, our work attempts to leverage the strengths of GPT-3 to generate a more linguisti-cally diverse pool of problem statements to finetune a relatively smaller DeBERTa solver model on the downstream task of MWP solving which falls under the rubric of complex reasoning tasks. Figure-1 in Appendix-A shows an overview of our proposed architecture. Given a problem statement S, we prompt the paraphraser model to generate k linguistic variants of S which are, S 1 , S 2 , . . . , S k . These k variant problems along with the seed problem S consists of quantities that are tagged appropriately using quantity tags. Each of the k + 1 text sequences is then tokenized and the content embeddings H and positional embeddings P of the tokens are fed to the DeBERTa model. The disentangled self-attention mechanism of DeBERTa's encoder utilizes H and P to generate the output H output , which is a contextual representation of the content of each problem statement. H output , along with the relative positional embeddings P and absolute positional embeddings I of each of the problem statements are used by the Transformer layers of Enhanced Mask Decoder (EMD) of DeBERTa to generate the k + 1 predicted equations E 1 , E 2 , . . . , E k+1 . These equations are then simplified and the equation that is predicted the most number of times is elected as the final prediction of the model. This majority voting module is used only during the validation/testing phase and for inference. During the training phase, the k + 1 problem statements are deemed as stand-alone training samples and the Negative Log-Likelihood loss (NLLLoss) is calculated using the predicted equations and the ground-truth equation. Consequently, if the training set of the dataset used to train the model consists of n samples, it is as if the model is trained with (k + 1) × n = kn + n samples. The knowledge points gathered after being trained on an extra kn samples contributes to the robustness of the model. The task of correctly reformulating a Math Word Problem statement requires a good level of language understanding which is not present in its entirety in rule-based and data-driven methods of paraphrasing rendering them unsuitable in this case. These methods frequently yield incorrect, incoherent, and grammatically inaccurate linguistic variations; sometimes even leaving out crucial nu-merical information. Accordingly, we choose textdavinci-003 and gpt-3.5-turbo, two GPT-3 models from OpenAI, as the paraphrasing models. GPT-3 (Generative Pre-trained Transformer 3) The prompts that we use for accomplishing our linguistic variant generation task are, Here, the total number of linguistic variants of a problem, A detailed discussion on the types of problem variations is delineated in Section-5. All the quantities (written either numerically or in words) in every single variant of the problem along with the original problem itself, are tagged with unique quantity tags using RegEx and a Python script which is provided in our GitHub repository (see Section-1). This quantity tagging step ensures that the same quantity is present in both the input as well as in the output. The quantity-tagged tokens have their own content and positional embeddings. 
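A simplified sketch of the quantity-tagging step mentioned above (the released script additionally handles quantities written out in words; this version covers only numerals, and the example problem is the handbag problem shown earlier):

```python
# Replace each numeric quantity with a unique tag [Q1], [Q2], ... and keep
# the mapping so predicted equations can be grounded back to numbers.
import re

NUM_RE = re.compile(r"\d+(?:\.\d+)?")   # word-form numbers ("four") would
                                        # additionally need a lookup table

def tag_quantities(problem: str):
    mapping = {}
    def repl(match):
        tag = f"[Q{len(mapping) + 1}]"
        mapping[tag] = float(match.group())
        return tag
    tagged = NUM_RE.sub(repl, problem)
    return tagged, mapping

tagged, mapping = tag_quantities(
    "69 handbags are sold for $13 each. There are a total of 420 handbags ...")
# tagged  -> "[Q1] handbags are sold for $[Q2] each. There are a total of [Q3] ..."
# mapping -> {"[Q1]": 69.0, "[Q2]": 13.0, "[Q3]": 420.0}
```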
For example, if the problem statement is, "Melanie picked 4 plums, Dan picked 9 plums, and Sally picked 3 plums from the plum tree. How many plums were picked in total?" then the quantity-tagged version of the problem statement is, "Melanie picked [Q1] plums, Dan picked [Q2] plums, and Sally picked [Q3] plums from the plum tree. How many plums were picked in total?" We use this quantity tagging for the ground truth equation's quantities as well. We use the pre-trained language model DeBERTa (Decoding enhanced BERT with disentangled attention). DeBERTa is a newly developed neural language model by Contrary to BERT, which utilizes a vector representation for each word in the input layer by summing its content and position embeddings, in De-BERTa, every word is represented by two separate vectors that encode its content and position individually. The attention scores between words are computed using separate matrices that are disentangled based on the content and relative position of each word. This design choice is based on the observation that the attention weight between a pair of tokens is influenced by both their content and in tandem their relative positions. This especially holds paramount importance for the task of MWP solving as the relative positions of certain keywords in the problem statements dictate the solution. To represent a token x i located at a specific position i within a given sequence, it employs two dis-tinct vectors, H i and P i|j , which are respectively the content and relative positional representation vectors of x i with respect to a token x j at position j. The inter-token attention weights between x i and x j can be broken down into four constituent components, where, the four disentangled matrix attention scores represent their contents and positions as content-to-content (C2C), content-to-position (C2P), position-to-content (P2C), and position-toposition (P2P). The P2P portion of ( The self-attention mechanism described by where, R d×d are the projection weight matrices for the projected content vectors Q c , K c , V c respectively. Similarly, W r Q ∈ R d×d and W r K ∈ R d×d play the role of projection matrices for the projected relative position vectors Q r and K r . The metric to calculate the relative distance between tokens x i and x j is, which implies, δ(i, j) ∈ [0, 2k). Each element Āij of the attention matrix Ā denotes the attention score from token x i to the token x j and is computed using the vectors defined in (2) in the following manner, The attention score is yielded using the dotproduct of the query and key in the formula to let the model have an idea of how similar the key is to the query. The output of the self-attention mechanism, which is denoted by The result of the dot-product is normalized by dividing with √ 3d to avoid very hard softmax with small gradients, which is especially required for training stability in the case of large-scale PLMs He et al. ( Since there can be multiple valid equations for a single MWP, each of the k + 1 predictions from the decoder, E 1 , E 2 . . . , E k+1 , is simplified to a reduced normal form using the python package sympy 1 . These k + 1 simplified predictions, E ′ 1 , E ′ 2 . . . , E ′ k+1 , are then counted and the prediction that is the most frequent or that is yielded the most number of times is elected as the final answer of the whole solver model. 
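A minimal sketch of the simplification-and-election step: each candidate expression is reduced to a canonical form with sympy and the most frequent form wins. No tie-breaking strategy is specified in the paper, so ties here simply fall back to first-seen order; the expressions are assumed to have had quantity tags mapped back to their numeric values.

```python
from collections import Counter
import sympy

def elect(predicted_exprs):
    # predicted_exprs: the k+1 right-hand sides, e.g. "69*13 + (420-69)*7"
    canonical = [str(sympy.simplify(sympy.sympify(e))) for e in predicted_exprs]
    winner, _ = Counter(canonical).most_common(1)[0]
    return winner

elect(["69*13 + (420-69)*7", "(420-69)*7 + 13*69", "69*13 + 420*7"])
# the first two reduce to 3354, the third to 3837, so "3354" is elected
```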
It is to be noted that this voting mechanism is used only during the 1 5 Experiment We introduce a new large-scale dataset, namely PARAMAWPS (Paraphrased MAth Word Problem Solving Repository), consisting of 16,278 single equation MWPs. It is generated as a by-product of using one of the most commonly-used English MWP datasets, MAWPS • Changed phrase order -Variations with the order of the phrases being changed facilitate a break from the standard problem statement template where quantities are generally given before the question formulation. Having a changed ordering of phrases makes apriori question formulations more common. • Changed object and entity names -Object and entity names are altered with interchangeable alternatives (names, synonyms) in problem variations to prevent fixation on elements of the problem mostly agnostic to the process of solving the problem. It also serves to prevent an increase in density for similar terms that originate from the seed problem yielding good problem samples for language models • Added unrelated information -Some variations contain an extra phrase or quantity, or similar additions that are in excess of the information required to solve a problem and do not affect the original problem formulation in any meaningful way. These adversarial variations serve to obfuscate and familiarize the models with only the necessary information, enhancing deductive abilities • Inverted question -Some variations will take a previously known quantity and turn it into an unknown quantity while revealing the previous unknown quantity of the problem. This, in many cases, alters the question drastically, changing the needed calculations and equations, while keeping a roughly similar question body to the seed problem. Many of the seed problems used to generate variations from MAWPS pose sufficient difficulty to even SOTA MWP solvers and often contain numeric information embedded within the statement itself. An example is the following problem, "Mary, Sam, Keith, and Alyssa each have 6 marbles. How many marbles do they have in all?" This problem yields the equation "x = 4 × 6", despite the quantity 4 not being mentioned anywhere in the statement. This quantity had to be inferred from the other parts of the statement itself, namely, the 4 entities referred to in the statement; Mary, Sam, Keith, and Alyssa. Another such problem is, "When the price of diesel rose by 10%, a user reduced his diesel consumption by the same amount. How much would his diesel bill change in terms of percentage?" which yields the complex equation of "x = (1.0 -((1.0 + (10.0 × 0.01)) × (1.0 -(10.0 × 0.01)))) × 100.0". This problem, although seemingly simple on the surface in terms of quantities described, has several calculations dictated through the problem statement, some of which require additional realworld anecdotal knowledge, such as the conversion of percentages. Another problem with similar inferences of a more complex nature is, "Lauren wants to mix 5 liters of 7% milk with skim-milk (0% fat) to produce a mixture of 2.9787% milk. How much skim-milk should Lauren add?" yielding the equation "x = (7.0 × 0.01) × 5.0/(2.9787 × 0.01) -5.0", containing similar conversions of percentages, as well as additional knowledge of types of mixtures. Here, 7% milk is mixed with pure milk, or 100% milk. 
Yet the only indication that the milk is of 100% purity is nowhere to be seen in a direct capacity in the problem, but rather in a roundabout way -by referring to the amount of fat (0%) rather than the purity of the milk. Models have to infer a vast amount of real-world contextual knowledge to be able to solve such problems. Problems with seconddegree unknown quantities are also present as seed problems. For example, the problem "The Hudson River flows at a rate of 3 miles per hour. A patrol boat travels 60 miles upriver and returns in a total time of 9 hours. What is the speed of the boat in still water?" that yields the equation "(60.0/(x -3.0)) + (60.0/(3.0+x)) = 9.0", which is a quadratic equation. The problem itself deals with calculations of speed, which requires knowledge of how speed is calculated given certain quantities, as well as the effect of certain elements in the problem scenario on speed. We resort to this data generation approach due to the lack of large-scale, diverse, single-equation English MWP datasets. Other commonly-used benchmark datasets, MATH23K We implement the DeBERTa model using Microsoft's deberta-base that is publicly available in Hugging Face The superiority of the model's accuracy in PARAMAWPS over SVAMP, despite the demonstrably greater difficulty of the MWP samples in PARAMAWPS, indicates that training a language model on a more diverse set of linguistically varied problem statements leads to a better quality mathematical reasoning ability after the training phase. To gain insights into the individual contributions of the Paraphrasing Model and Voting Mechanism in conjunction with the DeBERTa model, we perform ablation studies. increasing the number of generated problem variants to infer the solution expressions of the problem samples in the MAWPS dataset's test set. Although there is a slight decrease in the accuracy for k = 5, we see a minuscule increase in accuracy for k = 10 and k = 15. In Table-5 we see the impact of the Voting Mechanism which contributed to a 5.4% increase on average in the accuracy of the DeBERTa model on the PARAMAWPS dataset. To test out the assertion made in other studies GPT-J (6B) 9.9 5.9 text-babbage-001 (6. One of the most capable models in the GPT-3.5 series of models is text-davinci-003, with 175 billion parameters and the ability to follow instructions consistently and produce lengthy outputs. However, the most capable and up-to-date model according to OpenAI is gpt-3.5-turbo, with 175 billion parameters, which is primarily optimized for chat completions but can be tweaked to follow more specific instructions similar to text-davinci-003. While all models used are instructed to output in a specific format -'Answer: [ANS]' with just the numerical value in the place of '[ANS]', the ability to do so consistently deteriorated with the models with relatively fewer parameters. Out of the base GPT-3 models, the 13 billion parameters text-curie-001 can output in the given format relatively consistently, text-babbage-001 with 6.7 billion parameters can occasionally produce the output in the correct format, but tries to generate full sentences more often than not, whereas the 350 million parameters text-ada-001 can barely generate a single output in the correct format, choosing to generate full sentences almost all of the time. 
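For reference, the deberta-base checkpoint used in the experimental setup above is the publicly available microsoft/deberta-base model on Hugging Face; a minimal loading sketch is shown below. Registering the quantity tags as special tokens is an assumption here, not something stated in the text.

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-base")
encoder = AutoModel.from_pretrained("microsoft/deberta-base")

inputs = tokenizer(
    "Melanie picked [Q1] plums, Dan picked [Q2] plums, and Sally picked "
    "[Q3] plums from the plum tree. How many plums were picked in total?",
    return_tensors="pt",
)
hidden_states = encoder(**inputs).last_hidden_state  # contextual token vectors
# In a full system the quantity tags would likely be added as special tokens
# so they are not split into subwords.
```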
Models tend to try to 'work through' the problem in text form rather than just generating the output, although with gpt-3.5-turbo this can be mostly mitigated by using very specific instructions for the prompt. The results in Table-6 and Table-3 support the current weakness of LLMs in mathematical reasoning tasks and the suitability of fine-tuning smaller models. It indicates the improvement in performance for a well-reasoning, but comparatively small model when it has the option to democratically choose from a substantial number of solution guesses. In this paper, we propose the idea of an MWP solving framework that utilizes the paraphrased linguistic variations of problem texts to train a De-BERTa model that generates candidate solution expressions and finalizes the predicted math expression by employing majority voting on a set of simplified candidate expressions. Our findings demonstrate that incorporating linguistic variants of problem statements during training and utilizing a voting mechanism for candidate predictions enhance the model's mathematical reasoning and overall robustness. We also introduce a large-scale, diverse, and challenging singleequation MWP dataset, PARAMAWPS, consisting of paraphrased, inverse, and adversarial variants of selectively sampled datapoints from MAWPS, as a formidable evaluation test-bed and a proper benchmark for training MWP solver models. We wish to experiment further with harder problem text variations (e.g. grammatical errors) and conduct a thorough error analysis of the models for identifying their lapses in mathematical reasoning and discovering more scopes of improvement. We also aim to expand our research to encompass the intricate realms of multi-equation, multi-step deduction, and domain-knowledge problems. We hope our approach and findings will pave the way to more scholarly works on the vistas of AGI and in tandem be deemed a noteworthy and meaningful contribution to this domain of research. There are still some avenues of improvement in our work. The temporal overhead due to the problem variant generation by the paraphraser model may make our proposed architecture unsuitable for real-world applications even though it takes merely 10 to 12 seconds to generate k = 5 variants for a single sample. Another limitation of our work is the absence of a proper tie-breaking strategy in our Majority Voting module. Furthermore, we need to introduce a system of weighted votes (e.g. semantic similarity scores as weights) so that the votes of wrongly predicted equations don't trump that of correctly generated predictions. We also plan to incorporate and experiment with the Tree-based decoder | 1,430 | 1,380 | 1,430 |
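The weighted-voting extension mentioned in the limitations above is only proposed as future work; the hypothetical sketch below shows one way it could look, with a per-candidate similarity score acting as the weight of each vote.

```python
# Hypothetical weighted voting: each canonical expression accumulates the
# weights (e.g. semantic similarity scores) of the variants that produced it.
from collections import defaultdict

def weighted_elect(candidates, weights):
    totals = defaultdict(float)
    for expr, w in zip(candidates, weights):
        totals[expr] += w
    return max(totals, key=totals.get)

weighted_elect(["x=3354", "x=3354", "x=3837"], [0.9, 0.7, 0.95])
# -> "x=3354" (total weight 1.6 beats 0.95)
```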
Integrated Learning of Dialog Strategies and Semantic Parsing | Natural language understanding and dialog management are two integral components of interactive dialog systems. Previous research has used machine learning techniques to individually optimize these components, with different forms of direct and indirect supervision. We present an approach to integrate the learning of both a dialog strategy using reinforcement learning, and a semantic parser for robust natural language understanding, using only natural dialog interaction for supervision. Experimental results on a simulated task of robot instruction demonstrate that joint learning of both components improves dialog performance over learning either of these components alone. | Natural language understanding and dialog management are two integral components of a dialog system. Current research typically deals with optimizing only one of these components. We present an approach to integrate the learning of both a dialog strategy using reinforcement learning, and a semantic parser for robust natural language understanding, using only natural dialog interaction for supervision. Research in dialog systems has primarily been focused on the problems of accurate dialog state tracking and learning a policy for the dialog system to respond appropriately in various scenarios. Dialogs are typically modeled using Partially Observable Markov Decision Processes (POMDPs), and various reinforcement learning algorithms have been proposed and evaluated for the task of learning optimal policies over these representations to accomplish user goals using as short and natural a dialog as possible Semantic parsing is the task of mapping natural language to a formal meaning representation. It has the potential to allow for more robust mapping of free-form natural language to a representation that can be used to interpret user intentions and track dialog state. This is done by leveraging the compositionality of meaning inherent in language. Prior work has shown that a semantic parser, incrementally updated from conversations, is helpful in dialogs for communicating commands to a mobile robot A major challenge with combining the above parser and dialog policy learning techniques is that reinforcement learning (RL) algorithms assume that the dialog agent is operating in a stationary environment. This assumption is violated when the parser is updated between conversations. For example, the improved semantic parser may be able to extract more information from a response to a question, which the old parser could not parse. So the RL algorithm may have earlier assumed that asking such a question is not useful, but this is not the case with the updated parser. Our results show that this effect can be mitigated if we break the allowed budget of training dialogs into batches, updating both parser and policy after each batch. As the next training batch gets collected using the updated parser, the policy can be updated using this experience to adapt better to it. We demonstrate, using crowd-sourced results with a simulated robot, that by integrating learning of both a dialog manager and a semantic parser in this manner, task success is improved over cases where the components are trained individually. | Prior work has used dialog to facilitate robot task learning, e.g. 
There has been considerable work in semantic parsing using both direct supervision in the form of annotated meaning representations There has also been considerable work in goaldirected dialog systems in domains such as information provision More recently, there has been work on modeling various components of a dialog system using neural networks A Partially Observable Markov Decision Process (POMDP) is a tuple (S, A, T, R, O, Z, γ, b 0 ), where S is a set of states, A is a set of actions, T is a transition function, R is a reward function, O is a set of observations, Z is an observation function, γ is a discount factor and b 0 is an initial belief state At any instant of time t, the agent is in a state s t ∈ S. This state is hidden from the agent and only a noisy observation o t ∈ O of s t is provided to it. The agent maintains a belief state b t which is a distribution over all possible states it could be in at time t, where b t (s i ) gives the probability of being in state s i at time t. Based on b t , the agent chooses to take an action a t ∈ A according to a policy π, commonly represented as a probability distribution over actions where π(a t |b t ) is the probability of taking action a t when the agent is in belief state b t . On taking action a t , the agent is given a real-valued reward r t , transitions to a state s t+1 , and receives a noisy observation o t+1 of s t+1 . State transitions occur according to the probability distribution P (s t+1 |s t , a t ) = T(s t , a t , s t+1 ), observations are related to the states by the probability distribution P (o t |s t , a t-1 ) = Z(o t , s t , a t-1 ) and rewards obtained follow the distribution P (r t |s t , a t ) = R(s t , a t , s t+1 ). The objective is to identify a policy π that is optimal in the sense that it maximizes the expected long term discounted reward, called return, given by While there exist both exact and approximate methods for solving POMDPs, these do not usually scale well to the state spaces commonly used in dialog domains. This has led to the development of approximate representations that exploit domain-specific properties of dialog tasks to allow tractable estimation of the belief state and policy optimization 4 Background -Q-Learning using Kalman Temporal Differences The quality of a policy π can be estimated using the action value function The optimal policy satisfies the Bellman equation, When the state space is very large or continuous, Q π cannot be computed for each state (or belief state) individually and is hence assumed to be a function with parameters θ over some features that represent the state. When the transition or reward dynamics are not constant (non-stationary problem), a suitable approximation is the Kalman Temporal Differences framework Filtering problems estimate hidden quantities X from related observations Y, modeling X and Y as random variables. When estimating action values, X corresponds to the function parameters, θ and the observations, Y, are the estimated returns, r t + γ max a Qθt (s t+1 , a). Random noise is added to both of these to allow for parameters to change over time. The update rules are derived from Kalman Filtering Theory and not included here for the sake of brevity. Our system initiates the dialog by requesting the user for a command. The user can command the system to perform two actions: navigation and delivery. Navigation has a single parameter for the destination. 
For example "go to Alice's office" would be a possible way to command the robot to perform a navigation command, whose location is a room that is the office of a person alice. Delivery has two parameters: the item to be delivered and the person to receive it. For example, "bring Alice a hamburger" would be a possible way to specify a delivery command whose patient is an item hamburger and recipient is a person alice. The robot makes an initial guess of the desired action from the user's response, and then may ask clarification questions in case of insufficient understanding. At each step, it can respond with one of four dialog acts: asking the user to repeat their command, confirming a command or an argument value, requesting a specific argument of a command, and executing an action (thereby ending the dialog). A sample dialog is shown in Table The dialog is considered a success if the final action taken is correct and a failure otherwise. The user also has the ability to prematurely end the dialog, and any conversation terminated in this manner is also considered a failure. Semantic parsing maps a natural language sentence such as "Go to Alice's office" to a logical form expressed in λ-calculus such as: Grounding against real-world knowledge, this will identify a room, say room 3512, which is an office that is owned by alice. This formalism reduces the number of lexical entries the system needs to learn by exploiting compositional reasoning over language. For example, if the system learns that "Alice Ashcraft" and "Alice" both refer to the entity alice, no further lexical entries are required to resolve "Go to Alice Ashcraft's office" to the same semantic form (1). In our system, semantic parsing is performed using probabilistic CKY-parsing with a Combinatory Categorial Grammar (CCG) and meanings associated with lexical entries. Perceptron-style updates to parameter values, that minimize the log-likelihood of the training data, are used during training to weight parses to speed search and give confidence scores in parse hypotheses The parser is trained using paired sentences and logical forms. A small supervised training set is used to initialize the parser. Training continues using pairs obtained through weak supervision collected from user dialogs We use two such types of training pairs. The first consist of responses that are likely to correspond to the complete action, and the logical form induced by the action executed by the robot at the end of the dialog. Such responses are expected from the initial prompt to the user and questions that ask the user to repeat the command. We obtain multiple semantic parses for these responses, and parses that correspond to a complete command, and ground to the action finally taken by the robot, are paired with the response to form one set of training pairs. For example, from the conversation in Table The second set of training pairs is obtained from the arguments of the action, such as the patient or location involved. This consists of responses to requests for specific arguments. Again, we consider multiple semantic parses for these responses, and select those that are of the correct syntactic form for a single argument value, and which ground to the corresponding argument value in the final action, to be paired with the response. 
For example, from the conversation in Table We use a POMDP to model dialog and learn a policy The key idea behind this approach is to group states into equivalence classes called partitions, and maintain a probability for each partition instead of each state. States within a partition are those that are indistinguishable to the system given the current dialog. More concretely, our belief state can be factored into two main components. The first is the action (such as navigation and delivery) and argument values of the goal (such as the patient or location) which the user is trying to convey, g = {g a , g P AT , g RCP , g LOC }. Goal parameters are represented in terms of semantic roles -patient (g P AT ), recipient (g RCP ) and location (g LOC ), to allow them to generalize across different actions. The second component contains information from the most recent user utterance, u = {u t , u a , u P AT , u RCP , u LOC }. Here, u t is the type of the utterance -affirmation, denial, providing information about a complete action, or providing information about a specific argument. The components u a , u P AT , u RCP and u LOC respectively refer to the action, patient, recipient and location mentioned in the most recent user utterance, any of which can be null. This representation allows the method to be applicable to any action that can be expressed using up to 3 arguments. After every user response, a beam of possible choices for u can be obtained by grounding the beam of top-ranked parses from the semantic parser. Semantic type-checking is used to disallow violations such as alice serving as the location argument of a navigation. However, there are a large number of possible values for g and we use the idea of partitions When an utterance hypothesis u is obtained, every partition currently maintained is split if needed into partitions that are either completely consistent or inconsistent with u. For example, if a partition p has goals containing both navigation and deliv-ery actions, and u specifies a delivery action, p will have to be split into one partition p 1 with all the navigation goals and another partition p 2 with all the delivery goals. The probability mass of p is divided between p 1 and p 2 in proportion to their sizes, to maintain the invariant that the probability of a partition is the sum of the probabilities of the goals contained in it. Then, given the previous system action m, The belief b(p, u) is calculated as in the HIS model as follows Here, P (u) is the probability of the utterance hypothesis u given the user response, which is obtained from the semantic parser. T (m, u) is the probability that the type of the utterance hypothesis u t is compatible with the previous system action m, for example, if the system asks for the confirmation of a goal, the expected type of response is either affirmation or denial. This is determined by system parameters. M (u, m, p) is a 0-1 value indicating whether the action and argument values mentioned in the utterance, system action, and partition agree with each other (an example of where they do not is an utterance mentioning an action not present in any goal in the partition) and b(p) is the belief of partition p before the update, obtained by marginalizing out u from b(p, u). k is a normalization constant that allows the expression to become a valid probability distribution. We also track the number of dialog turns so far. The belief state is a distribution over all possible hypotheses given the conversation so far. 
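Combining the factors defined above, the partition split and the HIS-style belief update can be written compactly as follows (standard HIS notation; the exact typesetting in the original paper may differ):

```latex
b(p_i) \;=\; \frac{|p_i|}{|p|}\, b(p) \quad \text{(partition split)}
\qquad\qquad
b(p, u) \;=\; k \cdot P(u)\, T(m, u)\, M(u, m, p)\, b(p) \quad \text{(belief update)}
```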
The HIS model allows tracking probabilities of the potentially large number of hypotheses. However, it is difficult to learn a policy over this large a state space in a reasonable number of dialogs. Thus, we learn a dialog policy over a summary state as in previous work It is important to note that while only the top two hypotheses are used by the policy to choose the next action, it is useful to maintain the belief of all hypotheses because a hypothesis that is initially of low probability may become the most probable after additional turns of dialog. Probability of top hypothesis Probability of second hypothesis Number of goals allowed by the partition in the top hypothesis Number of parameters of the partition in the top hypothesis, required by its action, that are uncertain (set to the maximum value if there is more than one possible action) Number of dialog turns used so far Do the top and second hypothesis use the same partition (0-1) Type of last user utterance Action of the partition in the top hypothesis, or null if this is not unique The choice of policy learning algorithm is important because learning POMDP policies is challenging and dialog applications exhibit properties not often encountered in other reinforcement learning applications • Low sample complexity in order to learn from limited user interaction. • An off-policy algorithm to enable the use of existing dialog corpora to bootstrap the system, and crowdsourcing platforms such as Amazon Mechanical Turk during training and evaluation. • A model-free rather than a model-based algorithm because it is difficult to design a good transition and observation model for this problem • Robustness to non-stationarity because the underlying language understanding component changes with time (Section 5.1), which is likely to change state transitions. To learn the policy, we provided a high positive reward for correct completion of the task and a high negative reward when the robot chose to execute an incorrect action, or if the user terminated the dialog before the robot was confident about taking an action. The system was also given a per-turn reward of -1 to encourage shorter dialogs. The learning methods described above were applied to improve an initial dialog system using weak supervision from dialog interaction with real users. The dialog system was initialized using data from the conversation logs of The semantic parser was initialized using a small seed lexicon and trained on a small set of supervised examples constructed using templates for commands gathered from the conversation logs. While the parser can be used even if initialized using only a handful of hand-coded training examples, the increased robustness obtained by training on templated sentences results in less frustrating interaction during initial dialogs. The RL component was first initialized with a Q-function approximation of the hand-coded policy of The simplest alternative to such an initialization would be to initialize the policy at random, but this would lead to a large number of frustrating dialogs before the system learns a reasonable policy. This can be avoided by training with a simulated user agent. However, such agents are not always realistic and their design requires parameters to be set ideally from existing conversation logs. However, since we use an off-policy algorithm, it is easier to train it directly from conversation logs, rather than develop a sufficiently realistic simulated agent. 
Since the KTD-Q algorithm is off-policy, it can be trained using tuples containing the belief state, action taken, next belief state, and reward obtained from these logs. We update the policy using such tuples both in the initial training phase from existing conversation logs, and when updating the policy after collecting batches of conversations in our experiments. Our experiments were done through Mechanical Turk as in previous work A final set of 100 test conversations were then conducted between Mechanical Turk users and the trained agents. These test tasks were novel in comparison to the training data in that although they used the same set of possible actions and argument values, the same combination of action and argument values had not been seen at training time. For example, if one of the test tasks involved delivery of a hamburger to alice, then there may have been tasks in the training set to deliver a hamburger to other people and there may have been tasks to deliver other items to alice, but there was no task that involved delivery of a hamburger to alice specifically. We compared four dialog agents. The first agent performed only parser learning (described in Section 5.1). Its dialog policy was always kept to be a hand coded dialog policy similar to that of The second agent performed only dialog strategy learning. Its parser was always kept to be the initial parser that all agents started out with. Its policy was incrementally updated after each training batch using the KTD-Q algorithm. The third agent performed both parser and dialog learning; but instead of incrementally updating the parser and policy after each batch, they were trained at the end of the training phase using dialogs across all batches. This would not allow the dialog manager to see updated versions of the parser in batches after the first and adapt the policy towards the improving parser. We refer to this as full learning of parser and dialog policy. The fourth agent also performed both parser and dialog learning. Its parser and policy were updated incrementally after each training batch. Thus for the next training batch, the changes due to the improvement in the parser from the previous batch could, in theory, be demonstrated in the dialogs and hence contribute towards updating the policy in a manner consistent with it. We refer to this as batchwise learning of parser and dialog policy. We did not include a system that performs no learning on either the parser or policy because it was shown by We hypothesized that the agent performing batchwise parser and policy learning would outperform the agents performing only parser or only dialog learning as we expect that improving both components is more beneficial. However, we did not necessarily expect the same result from full parser and dialog learning because it did not provide any chance to allow updates to propagate even indirectly from one component to another, exposing the RL algorithm to a more non-stationary environment. Hence, we also expected batchwise learning to outperform full learning. The agents were evaluated on the test set using the following objective performance metrics: the fraction of successful dialogs (see 5) and the length of successful dialogs. We also included a survey at the end of the task asking users to rate on a 1-5 scale whether the robot understood them, and whether they felt the robot asked sensible questions. 
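A sketch of how logged conversations are turned into off-policy training tuples (belief state, action, reward, next belief state) for KTD-Q, using the reward scheme described earlier. The ±100 terminal magnitudes and the turn field names are placeholders: the text only says "high" positive and negative rewards.

```python
def reward(dialog_ended, action_correct, user_terminated):
    if dialog_ended:
        return 100.0 if action_correct and not user_terminated else -100.0
    return -1.0                      # per-turn penalty to encourage short dialogs

def experience_from_log(turns, action_correct, user_terminated):
    tuples = []
    for t in range(len(turns) - 1):
        tuples.append((turns[t]["summary_state"],          # b_t
                       turns[t]["system_action"],          # a_t
                       reward(False, action_correct, user_terminated),
                       turns[t + 1]["summary_state"]))     # b_{t+1}
    # terminal transition carries the task-completion reward
    tuples.append((turns[-1]["summary_state"],
                   turns[-1]["system_action"],
                   reward(True, action_correct, user_terminated),
                   None))
    return tuples
```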
Table Table As expected, the agent performing batchwise parser and dialog learning outperforms the agents performing only parser or only dialog learning, in the latter case by a large margin. We believe the agent performing only parser learning performs much better than the agent performing only dialog learning due to the relatively high sample complexity of reinforcement learning algorithms in general, especially in the partially observable setting. In contrast, the parser changes considerably even from a small number of examples. Also, we observe that full learning of both components does not in fact outperform only parser learning. We believe this is because the distribution of hypotheses obtained using the initial parser at training time is substantially different from that obtained using the updated parser at test time. We believe that batchwise training mitigates this problem because the distribution of hypotheses changes after each batch of training and the policy when updated at these points can adapt to some of these changes. The optimal size of the batch is a question for further experimentation. Using a larger batch is less likely to overfit updates to a single example but breaking the total budget of training dialogs into more batches allows the RL algorithm to see less drastic changes in the distribution of hypotheses from the parser. We include an experiment in the supplementary material that quantifies the accuracy improvement of the parsers after training from dialogs. It is more difficult to quantitatively compare the policies before and after learning. Qualitatively, one of the noticeable differences is that the system tends to confirm or act upon lower probability hypotheses than is recommended by the initial hand-coded policy. This is possibly because as the parser improves, its top hypotheses are more likely to be correct, even if they are associated with a lower confidence score from the parser. A demonstration of this can be seen in tables 4 and 5. The learned policy results in a shorter dialog in the same situation because it allows the agent to act upon a hypothesis of lower probability. Also, the learned policy is stochastic, which is very helpful when the agent is not able to understand the user at all. For example, if the agent is unable to parse any of the initial instructions from the user, under a handcoded policy, as its state has not changed, it would continue to repeat the question it had asked earlier, which prevents it from making any progress. However, in a stochastic policy, other more specific questions are likely to be substituted in between, and responses to these may allow the agent to make progress, which increases dialog success (table 6). In this work, we have demonstrated that continuous dialog strategy learning and semantic parser learning can be successfully combined in a dialog system to enable an agent to better understand commands provided in natural language. Both the semantic parser and the dialog strategy can be automatically improved simultaneously using weak feedback provided during interaction with users rather than manually-labeled or artificially constructed training data. Ongoing parser learning could have confused the RL dialog learner by altering the underlying language understanding system while it was searching for an effective dialog policy. 
However, our results show that by using an appropriate RL algorithm and batchwise training regimen, this potential difficulty can be avoided, and both language understanding and dialog management can be improved simultaneously. * indicates that the difference in performance between this and the Initial parser on the same metric is statistically significant according to a paired t-test with p < 0.05 and ˆindicates that the difference is trending significance (p < 0.1) . As expected, we observe that the initial parser (no learning) and the parser from the system performing only dialog learning, perform worse than the others, as the other systems update the parser used by these. The parser of the system performing only dialog learning is in fact a copy of the initial parser and was included only for completeness. Any difference in their performance is due to randomness. The parsers updated from dialogs improve in accuracy but the differences are found to be statistically significant only on Recall@10. The modest improvement is unsurprising given that the supervision provided is both noisy and weak. However, as seen in the main paper, even this modest improvement is sufficient to improve overall dialog success. Many NLP systems typically return a list of topn hypotheses, including semantic parsers. We use the entire beam of top-n parses when updating the state. This is expected to be beneficial in cases where that the correct hypothesis is not the top ranked but present in this beam. The following experiment demonstrates that using multiple parses when updating the state improves overall dialog success. We compared an agent that used the same parser and policy as in the batchwise training but only the top ranked parse from the parser to update its state, as opposed to a beam of parses when updating its state. These two systems differed in no other components. Dialog length 1 0.59 9.17 10 0.64 12.18 Table Table | 680 | 2,536 | 680 |
PANACEA: An Automated Misinformation Detection System on COVID-19 | In this demo, we introduce a web-based misinformation detection system PANACEA on COVID-19 related claims, which has two modules, fact-checking and rumour detection. Our fact-checking module, which is supported by novel natural language inference methods with a self-attention network, outperforms state-ofthe-art approaches. It is also able to give automated veracity assessment and ranked supporting evidence with the stance towards the claim to be checked. In addition, PANACEA adapts the bi-directional graph convolutional networks model, which is able to detect rumours based on comment networks of related tweets, instead of relying on the knowledge base. This rumour detection module assists by warning the users in the early stages when a knowledge base may not be available. | The dangers of misinformation have become even more apparent to the general public during the COVID-19 pandemic. Following false treatment information has led to a high number of deaths and hospitalisations In this work, we focus on automating misinformation detection using information from credible sources as well as social media. We produce a webbased tool that can be used by the general public to inspect relevant information about the claims that they want to check, see supporting or refuting evidence, and social media propagation patterns. For false information, the commonly used and relatively reliable method for automated veracity assessment is to check the claim against a verified knowledge base, which we call fact-checking. Previous works such as EVIDENCEMINER In addition to false information, truthful information can also be misused to harm competitors or gain attention on social media Previous work have either retrieved tweets from a short fixed time period A screencast video introducing the system PANACEA covers various types of misinformation detection related to COVID-19 with the following contributions: • We built a new web-based system, PANACEA, which is able to perform both fact-checking and rumour detection with natural language claims submitted by users. The system includes visualisations of various statistical analyses of the results for a better user understanding. • PANACEA performs automated veracity assessment and provides supporting evidence that can be ranked by various criteria, supported by novel natural language inference methods. The system is able to manage multiple user requests with low latency thanks to our development of a queuing system. • PANACEA is able to perform automated rumour detection by exploiting state-of-the-art research on propagation patterns. The system uses an annotated dataset and streams of COVID-19 tweets are collected to maintain an updated database. | The following datasets are used in the project: Knowledge Database This is used for factchecking, and includes COVID-19 related documents from selected reliable sources PANACEA Dataset In order to fine-tune our model, we constructed a new COVID-19 related propagation tree dataset for rumour detection. Similar previous datasets are Twitter15 and Twitter16 COVID Twitter Propagation Tree (Live) Besides the last dataset constructed for fine-tuning, PANACEA also runs a crawler to collect a stream of COVID-19 tweets that are used to maintain an updated database. This live dataset is not annotated, instead, it is labelled by the pre-trained rumour detection model. 
As the Twitter's search API does not allow retrieval of tweets beyond a week window, we retrieve COVID-19 related historical tweets based on the widely used dataset of COVID-19-TweetIDs 3 Architecture of PANACEA Figure (2) veracity assessment; and (3) supporting evidence retrieval. PANACEA also supports a unique function, rumour detection by propagation patterns, which has the following modules: (1) tweet retrieval; (2) rumour detection; and (3) tweet metainformation analysis. Supporting Evidence Retrieval This module includes three parts: document retrieval, sentence retrieval and corresponding meta-data generation. Multi-stage retrieval is applied, retrieving first the top 100 relevant documents with BM25, that then are re-ranked by MonoT5 Another approach to detecting rumours that has been found to be effective Claim-related tweets retrieval Similar to the fact-checking module, this module includes an autocomplete function for the user's natural language input claim that guesses the input from our claims dataset. The results for existing claims are also pre-computed to retrieve tweets faster. For a claim that is not in our claim dataset, we use BM25 to retrieve the related propagation trees from the large Twitter propagation tree database maintained by the active Twitter crawler. Rumour Assessment and Data Analysis PANACEA adapts a bi-directional graph convolutional networks model (BiGCN) Twitter propagation visualisation As shown in Figure 1. Tweet Count, showing the total number of tweets related to the input claim against the posting date, and aiming to reflect the total influence and scale of discussion of the claim. Fact-Checking We investigate the performance of our system in document retrieval and veracity assessment in 9 This paper introduces a web-based system on factchecking and rumour detection based on novel natural language processing models for COVID-19 misinformation detection. Going forward, we will keep updating the data and explore other methods for misinformation identification to improve the current system and introduce more functions to the system as part of our continuing efforts to support the general public to identify misinformation. | 783 | 1,936 | 783 |
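A minimal sketch of the first retrieval stage in the PANACEA fact-checking module described above, assuming the third-party rank_bm25 package; the MonoT5 re-ranking stage is indicated only by a comment.

```python
# BM25 first stage: retrieve the top 100 documents for a claim, which would
# then be re-ranked by a MonoT5 cross-encoder before sentence retrieval.
from rank_bm25 import BM25Okapi

corpus = ["masks reduce transmission of respiratory droplets ...",
          "vitamin c does not cure covid-19 ...",
          "..."]                           # documents from the knowledge database
tokenized_corpus = [doc.split() for doc in corpus]
bm25 = BM25Okapi(tokenized_corpus)

claim = "vitamin c cures covid-19"
top_docs = bm25.get_top_n(claim.split(), corpus, n=100)
# top_docs would next be scored pairwise by MonoT5 (claim, document) relevance
# and re-ranked before sentence-level evidence retrieval.
```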
CBNU System for SIGMORPHON 2019 Shared Task 2: a Pipeline Model | In this paper we describe our system for morphological analysis and lemmatization in context, using a transformer-based sequence to sequence model and a biaffine attention based BiLSTM model. First, a lemma is produced for a given word, and then both the lemma and the given word are used for morphological analysis. We also make use of character level word encodings and trainable encodings to improve accuracy. Overall, our system ranked fifth in lemmatization and sixth in morphological accuracy among twelve systems, and demonstrated considerable improvements over the baseline in morphological analysis. | In this paper we present our neural network architecture that we have used for the SIGMORPHON 2019 shared task 2 Hence, morphological analysis/tagging is a classification task for an input sequence. | There are two tasks in SIGMORPHON 2019 and we chose task 2. The idea of the task is simple: the input is a sentence made of words and the output is a lemma and morphosyntactic description (MSD) for each word. Table The dataset consists of initial 98 datasets of more than 60 distinct languages, and additional nine surprise languages/datasets that were added later. Some of the datasets consist of languages that are not widespread in terms of their usage and amount of available training data. For example, Akkadian has only 80 sentences in training data, and other low-resource languages similarly have small numbers of sentences: Amharic has 859, Bambara 820, Buryat 741, Cantonese 520, etc. On the other hand, Russian SynTagRus and Czech PDT respectively have 49,511 and 70,330 sentences in their training data. In addition to having less training data, some of the lowresource languages also do not have pre-trained word vectors. In such cases, we use other related languages' word vectors as a substitute, as will be discussed later. The baseline model This illustrates the importance of MSD tags in the lemmatization process. However, lemmatization can be done effectively even without consideration of morphological tags. Therefore, our approach flips the order of operations: we first find the lemma for a given word and input the original sentence with the generated lemma to the MSD tagger. Equation Overall, given the nature of the required tasks, an m-to-n sequence to sequence model for lemmatization and a label classifier model for morphological analysis are used. The two models are trained separately and pipelined as shown in Figure Our lemmatizer is a sequence to sequence model and is based on an encoder-decoder architecture using Google's transformer A more formal leaderboard for the GLUE benchmark The specific code for lemmatization is taken from the tensor2tensor library 1 version 1.13.4 with some modification added for our task. We chose the built-in hyperparameter configuration of transformer_tiny. The input and the output is a sequence of characters and no pre-trained embedding is used. One word is input at a time, and thus no consideration is taken of context words. For instance, in the mentioned example, the encoder input is "t h e s e" as a sequence of characters and the decoder output is "t h e s e". Likewise, "g u y s" and "g u y", "w e r e" and "b e", etc. are input and output one by one. Overall, the number of attention layers or heads is 4 as opposed to 8 in the original paper and hence it requires less computational power without substantial loss in the accuracy. 
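A small sketch of how the character-level training pairs for the lemmatizer are laid out, matching the "t h e s e" / "g u y s" examples above: each word becomes a space-separated character sequence on both the encoder and decoder side, with no surrounding context words.

```python
def to_char_sequence(word: str) -> str:
    return " ".join(word)

pairs = [(to_char_sequence(w), to_char_sequence(l))
         for w, l in [("these", "these"), ("guys", "guy"), ("were", "be")]]
# [("t h e s e", "t h e s e"), ("g u y s", "g u y"), ("w e r e", "b e")]
```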
The model performs quite well and with this basic setup was ranked fifth among 12 participating systems. The task of morphological analysis uses the output of lemmatization after pipelining it. Furthermore, MSD tagging is very similar to another well researched NLP task: headdependent relation labelling in dependency parsing. Like head-dependent relation labelling, an MSD tag of a word is dependent on the word itself and its position within the sentence. As an example, let's consider two sentences: "I live in an apartment" and "I like live music". Even though "live" occurs in both sentences, the label we attach is dependent on the context. In other words, context words and the word itself determine its MSD tag. Therefore, we use the modified dependency parser reported by The model's input is an elementwise addition of four embeddings for an input word. We then pass the vector representation for each input word through BiLSTM layers with subsequent multilayer perceptron (MLP) and biaffine attention layers. The MSD tagging assigns a tag to each word while the dependency parsing assigns a tag to a relation between a pair of words. In the latter case, even though we need to tag a relation between a pair of words, each word needs a label. Furthermore, information from two words only is not enough and the parser has to attend actually to the whole context to assign the correct label. Therefore, we need attention over all input words in the dependency parsing and we leave this feature for the MSD tagger too. The optimization is done by the Adam optimizer (Knigma and Ba 2014). We trained the model until there were no improvements after 5000 steps. The number of BiLSTM layers was three and the dimension of each LSTM cell as well as the word vector was 100 (300 when fastText 2 is used). We mainly used pre-trained embeddings of words from the CoNLL 2017 shared task For each word, there are four embeddings, which are summed elementwise: pre-trained, trainable, character level, and lemma. Trainable embeddings are vectors that are initialized randomly and then trained as the training proceeds. Likewise, lemma vectors are also initialized randomly. The process of character level embedding generation is more involved and is based on the character level word representation by After experiments with different hyperparameter settings, we were able to choose optimal settings, as was described earlier. Table Our choice of lemmatization followed by an MSD tagging was an important step for increasing MSD tag accuracy. Although, a full-scale ablation study was not performed due to time constraints, an experiment for MSD tagging without lemma on English-PUD and Korean-Kaist treebanks were performed. On both datasets, a decrease in accuracy was observed. For English-PUD's morph accuracy and F1 scores decreased by 1.18 and 0.43 percentage points, while Korean-Kaist's respective scores decreased by 7.50 and 8.41 percentage points. We conjecture that the larger decrease in Korean is due to its higher morphological complexity than English; a lemma itself is more important to find MSD tags for morphological rich languages. In general, as more training data were available, higher scores were obtained in absolute terms. As an example, for Russian, among four available datasets (Russian-GSD, Russian-PUD, Russian-SynTagRus, and Russian-Taiga) Russian-SynTagRus was the largest, and its accuracy was best by all four metrics used. 
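For concreteness, the tagger input described earlier in this section, four embeddings summed elementwise and fed to a three-layer BiLSTM, can be sketched as below. This is not the authors' code; in particular, the character-level component is shown as a small BiLSTM over characters as one plausible realization, and the MLP and biaffine scoring layers are omitted.

```python
import torch
import torch.nn as nn

class MSDTaggerInput(nn.Module):
    def __init__(self, vocab_size, char_vocab_size, dim=100):
        super().__init__()
        self.pretrained = nn.Embedding(vocab_size, dim)   # init from CoNLL'17 vectors
        self.trainable  = nn.Embedding(vocab_size, dim)   # randomly initialised
        self.lemma      = nn.Embedding(vocab_size, dim)   # randomly initialised
        self.char_emb   = nn.Embedding(char_vocab_size, dim)
        self.char_lstm  = nn.LSTM(dim, dim // 2, bidirectional=True, batch_first=True)
        self.bilstm     = nn.LSTM(dim, dim, num_layers=3,
                                  bidirectional=True, batch_first=True)

    def char_repr(self, char_ids):                        # [batch*words, max_chars]
        _, (h, _) = self.char_lstm(self.char_emb(char_ids))
        return torch.cat([h[0], h[1]], dim=-1)            # [batch*words, dim]

    def forward(self, word_ids, lemma_ids, char_ids):
        b, n = word_ids.shape
        # Elementwise sum of the four per-word embeddings.
        x = (self.pretrained(word_ids) + self.trainable(word_ids)
             + self.lemma(lemma_ids)
             + self.char_repr(char_ids.view(b * n, -1)).view(b, n, -1))
        out, _ = self.bilstm(x)       # contextual states -> MLP + biaffine scorer
        return out
```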
After experiments with different hyperparameter settings, we were able to choose optimal settings, as was described earlier. Our choice of lemmatization followed by MSD tagging was an important step for increasing MSD tag accuracy. Although a full-scale ablation study was not performed due to time constraints, experiments for MSD tagging without the lemma were performed on the English-PUD and Korean-Kaist treebanks. On both datasets, a decrease in accuracy was observed. English-PUD's morph accuracy and F1 scores decreased by 1.18 and 0.43 percentage points, while Korean-Kaist's respective scores decreased by 7.50 and 8.41 percentage points. We conjecture that the larger decrease in Korean is due to its higher morphological complexity than English; the lemma itself is more important for finding MSD tags in morphologically rich languages. In general, as more training data were available, higher scores were obtained in absolute terms. As an example, for Russian, among the four available datasets (Russian-GSD, Russian-PUD, Russian-SynTagRus, and Russian-Taiga) Russian-SynTagRus was the largest, and its accuracy was best by all four metrics used. Some languages have more MSD tags than others and therefore present another dimension of task complexity. For instance, the Czech-PDT treebank has 2895 unique MSD tags while English-EWT has only 179, i.e., 16 times fewer. This partly affects the accuracy of the MSD tagger: Czech-PDT's morphological accuracy is 89.88% while English-EWT's is 95.82%. While there is a lot of variance in the number of MSD tags among languages, most of the languages have around twenty to sixty characters in their alphabet. Hence, the number of characters in the alphabet does not seem to affect lemmatization. At the same time, Chinese uses distinct characters for each word and does not have word inflections. Despite having 3536 unique characters, the Chinese-GSD treebank's lemma accuracy is 99.98%. It also has only 40 MSD tags due to the absence of inflections. Overall, lemmatization appears to be a slightly easier task than MSD tagging, and in our case, incorporating lemma information in MSD tagging yielded more accurate results for the latter. Our pipeline model has shown favorable results in SIGMORPHON Shared Task 2 and scored fifth and sixth place, respectively, for lemmatization and MSD tagging. For future work, it would be interesting to assess how incorporating the output of MSD tagging into lemmatization would affect lemma accuracy. | 608 | 198 | 608 |
From Discourse to Narrative: Knowledge Projection for Event Relation Extraction | Current event-centric knowledge graphs rely heavily on explicit connectives to mine relations between events. Unfortunately, due to the sparsity of connectives, these methods severely undermine the coverage of EventKGs. The lack of high-quality labelled corpora further exacerbates that problem. In this paper, we propose a knowledge projection paradigm for event relation extraction: projecting discourse knowledge to narratives by exploiting the commonalities between them. Specifically, we propose Multi-tier Knowledge Projection Network (MKPNet), which can leverage multi-tier discourse knowledge effectively for event relation extraction. In this way, the labelled data requirement is significantly reduced, and implicit event relations can be effectively extracted. Intrinsic experimental results show that MKPNet achieves the new state-of-the-art performance, and extrinsic experimental results verify the value of the extracted event relations. | Event-centric knowledge graphs (EventKGs) model the narratives of the world by representing events and identifying relations between them, which are critical for machine understanding and can benefit many downstream tasks, such as question answering. Recently, semi-automatically constructing EventKGs has gained much attention. Unfortunately, the connective-based approaches face a critical coverage problem due to the sparsity of connectives. That is, a large proportion of event pairs are not connected with explicit connectives, but still hold underlying event relations. We denote them as implicit event relations. Furthermore, the related events may not even be close to each other in a document, as in the example in the figure. In this paper, we propose a new paradigm for event relation extraction - knowledge projection. Instead of relying on sparse connectives or building classifiers from scratch, we project discourse knowledge to event narratives by exploiting the anthropological linguistic connections between them. Enlightened by these connections, we design Multi-tier Knowledge Projection Network (MKPNet), which can leverage multi-tier discourse knowledge effectively for event relation extraction. MKPNet introduces three kinds of adaptors to project knowledge from discourses into narratives: (a) a token adaptor for token-level knowledge projection; (b) a semantic adaptor for semantic-level knowledge projection; (c) a coarse category adaptor for label-level knowledge projection. By sharing the parameters of these three adaptors, the commonalities between discourses and narratives at various levels can be effectively explored. Therefore, we can obtain more general token representations, more accurate semantic representations, and more credible coarse category representations to better predict event relations. We conduct intrinsic experiments on ASER. The main contributions of this paper are: • We propose a new knowledge projection paradigm, which can effectively leverage the commonalities between discourses and narratives for event relation extraction. • We design MKPNet, which can effectively leverage multi-tier discourse knowledge for event relation extraction via a token adaptor, a semantic adaptor and a coarse category adaptor. • Our method achieves the new SotA event relation extraction performance, and an enriched EventKG is released by extracting both explicit and implicit event relations. We believe it can benefit many downstream NLP tasks. | Event Relation Extraction (ERE). 
connectives, while implicit relations lack these surface cues. To resolve the implicit discourse relation recognition (IDRR) task, researchers have constructed high-quality labelled datasets. Associations between Discourse and Narrative. Recent NLP studies have shown that discourse and narratives closely interact with each other, and that leveraging discourse knowledge benefits narrative analysis significantly, for example in subevent detection. In this section, we describe how to learn an effective event relation extractor by projecting resource-rich discourse knowledge to the resource-poor narrative task. Specifically, we propose Multi-tier Knowledge Projection Network (MKPNet), which can effectively leverage multi-tier discourse knowledge for implicit event relation extraction. For knowledge projection, we model both event relation extraction (ERE) and discourse relation recognition (DRR) as an instance-pair classification task. For ERE, the input is an event pair such as <E 1 : "PER goes to the restaurant", E 3 : "PER is so hungry"> and the output is an event relation such as Reason. For DRR, the input is a clause pair such as <D 1 : "Tom goes to the restaurant", D 3 : "he is so hungry"> and the output is a discourse relation such as Cause. Specifically, MKPNet extends the SotA DRR model, BERT-CLS, by feeding the concatenation of the token, semantic and coarse category representations to the final relation classifier, where ⊕ means the concatenation operation. In this way, the parameters of MKPNet can be grouped as {θ BERT , θ Semantic , θ Coarse , θ Fine }, where θ BERT is for the BERT-based token encoder, θ Semantic for the VAE-based semantic encoder, θ Coarse for the coarse category encoder and θ Fine for the final relation classifier layer, respectively. Recent studies have shown that similar tasks usually share similar lexical and syntactic structures and therefore lead to similar token representations. Specifically, given an event pair <E 1 , E 2 >, we represent it as the sequence [CLS] E 1 [SEP] E 2 [SEP], where [CLS] and [SEP] are special tokens. For each token in the input, its representation is constructed by combining the corresponding token, segment and position embeddings. Then, the event pair representation is fed into the BERT architecture to obtain the token-level event pair representation h e [CLS] . The token-level discourse pair representation h d [CLS] can be obtained in the same way for DRR. To project the token-level knowledge, we use the same BERT for event pair and discourse pair encoding. During the optimization process, it is fine-tuned using the supervision signals from both ERE and DRR.
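As a concrete illustration of this instance-pair encoding, the snippet below shows how a standard HuggingFace BERT tokenizer produces the [CLS] E 1 [SEP] E 2 [SEP] input described above. The checkpoint name and the use of this particular API are illustrative assumptions, not details taken from the paper.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# ERE instance pair: the tokenizer emits "[CLS] E1 [SEP] E2 [SEP]" with
# token type (segment) ids distinguishing the two events.
e1 = "PER goes to the restaurant"
e2 = "PER is so hungry"
enc = tokenizer(e1, e2, return_tensors="pt")
print(tokenizer.convert_ids_to_tokens(enc["input_ids"][0]))
print(enc["token_type_ids"][0])   # segment ids: 0 for the first event, 1 for the second

# A DRR clause pair is encoded identically with the same shared tokenizer/encoder.
d1, d2 = "Tom goes to the restaurant", "he is so hungry"
enc_drr = tokenizer(d1, d2, return_tensors="pt")

Because the same tokenizer and encoder are shared across ERE and DRR, both tasks see inputs in an identical format, which is exactly what the token adaptor exploits.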
Because narrative and discourse analyses need to accurately represent the deeper semantics of the instance pairs, the shallow token-level knowledge captured by the BERT-based token encoder is not enough. However, BERT always induces a non-smooth, anisotropic semantic space, which is adverse to the semantic modelling of large-grained linguistic units. To address this issue, we introduce a variational autoencoder-based (VAE-based) semantic encoder to represent the semantics of both events and clauses by transforming the anisotropic semantic distribution into a smooth and isotropic Gaussian distribution. Both variational parameters and generative parameters are learned jointly. Specifically, the VAE is a directed graphical model with a generative model P and a variational model Q, which learns the semantic representation h z of the input in an autoencoder framework. In the following, h [CLS] can be h e [CLS] or h d [CLS] , and h Y can be h e Y or h d Y according to the different tasks. We 1) first obtain the input- and output-side representations via the shared BERT-based token encoder and the individual relation embedding networks, i.e., h [CLS] and h Y ; 2) then perform a non-linear transformation that projects them onto the semantic space; 3) obtain the above-mentioned Gaussian parameters µ and log σ 2 through linear regression, where W and b are the parameter matrix and bias term respectively; and 4) use a reparameterization trick, where ε ∼ N (0, I) and h z can be h e z or h d z . The neural model for the prior p(h z |h [CLS] ) is parameterized in the same way. During testing, due to the absence of the output-side representation h Y , we set h z to be the mean of p(h z |h [CLS] ). To project the semantic-level knowledge, we use the same VAE for both event pairs and discourse pairs. Therefore, the commonalities of event semantics and discourse semantics can be captured more accurately. The token adaptor and the semantic adaptor cover the knowledge contained on the input side well. In addition, we found that ERE and DRR share the same coarse-grained categories: Temporal, Contingency, Comparison and Expansion. To this end, we design the coarse category adaptor in a coarse-to-fine framework. Specifically, we first use the token representation h [CLS] and the semantic representation h z to predict the coarse-grained labels, where Y c ∈ {Temporal, Contingency, Comparison, Expansion}. After that, we use the coarse label embedding network to obtain the corresponding coarse-grained label embedding h Y c , which is referred to as the coarse category representation. To project that label-level knowledge, we use the same coarse-grained classifier and the same coarse label embedding network. During the optimization process, both event instances and discourse instances can be used to train this coarse category encoder, and the additional supervision signals make it more effective. In this paper, we utilize multi-task learning to jointly optimize the fine-grained and coarse-grained objectives together with the semantic encoder, where Y can be Y im or Y d according to the different tasks, λ and α are two hyperparameters, KL(P ||Q) is the KL divergence in the semantic encoder, and L(θ; Y ) and L(θ; Y c ) are the fine-grained and coarse-grained objectives respectively. It should be noticed that in MKPNet, {θ BERT , θ Semantic , θ Coarse } are the parameters of the BERT-based token encoder, the VAE-based semantic encoder and the coarse category encoder shared between ERE and DRR, and {θ Fine } are the separate parameters of the fine-grained ERE and DRR classifiers.
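To make the above computation concrete, here is a hedged Python/PyTorch sketch of an MKPNet-style forward pass as it can be read from the description: a shared BERT token encoder, a VAE-style semantic encoder with the reparameterization trick, a shared coarse-grained classifier with a coarse label embedding, and task-specific fine-grained heads. This is not the authors' released code; the layer sizes, the checkpoint name, the standard-normal KL term used instead of the learned prior p(h z |h [CLS] ), the omission of the output-side relation embedding h Y during encoding, and the argmax selection of the coarse label are all simplifying assumptions.

import torch
import torch.nn as nn
from transformers import AutoModel

class MKPNetSketch(nn.Module):
    def __init__(self, n_fine_ere, n_fine_drr, n_coarse=4, z_dim=128, hidden=768):
        super().__init__()
        self.bert = AutoModel.from_pretrained("bert-base-uncased")  # shared token encoder
        self.to_mu = nn.Linear(hidden, z_dim)                        # shared semantic encoder
        self.to_logvar = nn.Linear(hidden, z_dim)
        self.coarse_clf = nn.Linear(hidden + z_dim, n_coarse)        # shared coarse classifier
        self.coarse_emb = nn.Embedding(n_coarse, z_dim)              # shared coarse label embeddings
        self.fine_clf = nn.ModuleDict({                              # task-specific fine heads
            "ere": nn.Linear(hidden + 2 * z_dim, n_fine_ere),
            "drr": nn.Linear(hidden + 2 * z_dim, n_fine_drr),
        })

    def forward(self, input_ids, attention_mask, task):
        h_cls = self.bert(input_ids=input_ids,
                          attention_mask=attention_mask).last_hidden_state[:, 0]
        mu, logvar = self.to_mu(h_cls), self.to_logvar(h_cls)
        eps = torch.randn_like(mu)
        h_z = mu + eps * torch.exp(0.5 * logvar)                     # reparameterization trick
        coarse_logits = self.coarse_clf(torch.cat([h_cls, h_z], dim=-1))
        h_yc = self.coarse_emb(coarse_logits.argmax(dim=-1))         # coarse category representation
        fine_logits = self.fine_clf[task](torch.cat([h_cls, h_z, h_yc], dim=-1))
        # Simplification: KL against a standard normal prior instead of the
        # learned prior p(h_z | h_[CLS]) described in the text.
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return fine_logits, coarse_logits, kl

# One possible joint objective (lam and alpha stand for the hyperparameters λ and α):
# loss = ce(fine_logits, y_fine) + alpha * ce(coarse_logits, y_coarse) + lam * kl

A joint objective in the spirit of the multi-task setup described above would then combine the fine-grained cross-entropy, the coarse-grained cross-entropy weighted by α, and the KL term weighted by λ.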
We conduct intrinsic experiments on ASER. Datasets. For discourse relation recognition (DRR), we use PDTB 2.0. Baselines. For ERE, we compare the proposed MKPNet with the following baselines: • Baselines w/o Discourse Knowledge are trained only on the IERE training set. We choose BERT-CLS as their representative due to its SotA performance. • Baselines with Discourse Knowledge improve the learning of ERE via transfer learning. For DRR, we compare the proposed MKPNet with the following baselines, including the model of Bai and Zhao. 1. Based on MKPNet, we enrich the original ASER with abundant implicit event relations. Considering the computational complexity, we classify the event pairs that co-occur in the same document. 2. The proposed MKPNet achieves SotA performance for ERE. MKPNet significantly outperforms BERT-Transfer and achieves 55.86 accuracy and 55.36 F1. MKPNet w/o KP obtains considerable performance improvements when compared with BERT-CLS. We believe this is because MKPNet fully explores the knowledge on different tiers, and modelling knowledge tier-by-tier is effective. 3. By projecting knowledge at the token level, semantic level and label level, all three adaptors are useful and complementary to each other. When compared with the full model MKPNet, its four variants show degraded performance to different degrees. MKPNet outperforms MKPNet w/o CA by 0.72 accuracy and 0.94 F1 points, which indicates that our coarse category adaptor successfully bridges the gap between heterogeneous fine-grained targets. MKPNet outperforms MKPNet w/o SA by 0.57 accuracy and 0.44 F1 points, and therefore we believe that our latent semantic adaptor is helpful for capturing the semantic-level commonalities. Finally, there is a significant decline between MKPNet w/o KP and MKPNet w/o SA & CA, which means that the token adaptor is indispensable. The insight behind these observations is that the commonalities between discourses and narratives form a hierarchical structure; thus, projecting them at different levels is effective, and the three adaptors are complementary to each other. 4. The commonalities between discourses and narratives are beneficial for both ERE and DRR. Compared with the baselines w/o discourse knowledge, BERT-CLS and MKPNet w/o KP, both the naive transfer method BERT-Transfer and our MKPNet achieve significant performance improvements: BERT-Transfer gains 1.29 accuracy and 1.20 F1 when compared to BERT-CLS, and MKPNet gains 1.92 accuracy and 1.84 F1 when compared to MKPNet w/o KP. Besides, for DRR, our method MKPNet also substantially outperforms the other baselines and its variant MKPNet w/o KP. These results verify the commonalities between discourse knowledge and narrative knowledge. Effects of Semantic-level Knowledge and Label-level Knowledge. In these experiments, we compare the performance of our models, MKPNet, MKPNet w/o CA and MKPNet w/o SA, with or without knowledge projection to find out the effects of semantic-level knowledge and label-level knowledge. Tradeoff between Dataset Quality and Size. As described above, the IERE training dataset is constructed using the most confident instances in the ASER core version. We can construct a larger but lower-quality dataset by incorporating more instances with lower confidence, i.e., the quality-size tradeoff problem. To analyze the tradeoff between quality and size, we construct a set of datasets with different sizes/qualities. The above intrinsic experiments verified the effectiveness of the proposed MKPNet for ERE. In this section, we use the core version of our enriched EventKG, ASER++, and conduct extrinsic experiments on the Winograd Schema Challenge (WSC). • Pure Knowledge-based Methods are heuristic rule-based methods, such as Knowledge Hunting. • Language Model-based Methods use a language model trained on a large-scale corpus and tuned specifically for the WSC task, such as LM (Trinh and Le, 2018). • External Knowledge Enhanced Methods are models based on BERT and trained with different external knowledge resources, e.g., event-centric knowledge graphs. Knowledge graphs have evolved from entity-centric ones to event-centric ones. Knowledge Transfer. Due to the data scarcity problem, many knowledge transfer studies have been proposed, including multi-task learning. In this paper, we propose a knowledge projection paradigm for event relation extraction, and Multi-tier Knowledge Projection Network (MKPNet) is designed to leverage multi-tier discourse knowledge.
By effectively projecting knowledge from discourses to narratives, MKPNet achieves the new state-of-the-art event relation extraction performance, and extrinsic experimental results verify the value of the extracted event relations. For future work, we want to design new data-efficient algorithms to learn effective models using low-quality and heterogeneous knowledge. | 949 | 2,467 | 949 |